
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/
Updated: 7 months 1 week ago

David Tomaschik: The IoT Hacker's Toolkit

Mon, 16/04/2018 - 9:00 PM

Today, I’m giving a talk entitled “The IoT Hacker’s Toolkit” at BSides San Francisco. I thought I’d release a companion blog post to go along with the slide deck. I’ll also include a link to the video once it gets posted online.

Introduction

From my talk synopsis:

IoT and embedded devices provide new challenges to security engineers hoping to understand and evaluate the attack surface these devices add. From new interfaces to uncommon operating systems and software, the devices require both skills and tools just a little outside the normal security assessment. I’ll show both the hardware and software tools, where they overlap and what capabilities each tool brings to the table. I’ll also talk about building the skillset and getting the hands-on experience with the tools necessary to perform embedded security assessments.

While some IoT devices can be evaluated from a purely software standpoint (perhaps reverse engineering the mobile application is sufficient for your needs), a lot more can be learned about the device by interacting with all the interfaces available (often including ones not intended for access, such as debug and internal interfaces).

Background

I’ve always had a fascination with both hacking and electronics. I became a radio amateur at age 11, and in college, since my school had no concentration in computer security, I selected an embedded systems concentration. As a hacker, I’ve viewed the growing population of IoT devices with fascination. These devices introduce a variety of new challenges to hackers, including the security engineers tasked with evaluating these devices for security flaws:

  • Unfamiliar architectures (mostly ARM and MIPS)
  • Unusual interfaces (802.15.4, Bluetooth LE, etc.)
  • Minimal software (stripped C programs are common)

Of course, these challenges also present opportunities for hackers (white-hat and black-hat alike) who understand the systems. While finding a memory corruption vulnerability in an enterprise web application is all but unheard of, on an IoT device, it’s not uncommon for web requests to be parsed and served using basic C, with all the memory management issues that entails. In 2016, I found memory corruption vulnerabilities in a popular IP phone.

Think Capabilities, Not Toys

A lot of hackers, myself included, are “gadget guys” (or “gadget girls”). It’s hard not to look at every possible tool as something new to add to the toolbox, but at the end of the day, one has to consider how the tool adds new capabilities. It needn’t be a completely distinct capability; perhaps it offers improved speed or stability.

Of course, this is a “do as I say, not as I do” area. I, in fact, have quite a number of devices with overlapping capabilities. I’d love to claim this was just to compare devices for the benefit of those attending my presentation or reading this post, but honestly, I do love my technical toys.

Software

Much of the software does not differ from that for application security or penetration testing. For example, Wireshark is commonly used for network analysis (IP and Bluetooth), and Burp Suite for HTTP/HTTPS.

The website fccid.io is very useful in reconnaissance of devices, providing information about the frequencies and modulations used, and often internal photos of devices, which can reveal details such as chipsets, overall architecture, etc., all without lifting a screwdriver.

Reverse Engineering

Firmware images are often multiple files concatenated together, or contain proprietary metadata headers. Binwalk walks the image, looking for known file signatures, and extracts the components. Often this will include entire Linux filesystems, kernel images, etc.
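
If you prefer to drive it from a script, binwalk also exposes a small Python API. This is a minimal sketch (the firmware filename is a placeholder, and the keyword arguments simply mirror the --signature/--extract/--quiet command-line options):

import binwalk

# Scan a firmware image for known signatures and extract anything recognized.
# 'firmware.bin' is a placeholder path.
for module in binwalk.scan('firmware.bin', signature=True, extract=True, quiet=True):
    for result in module.results:
        print('0x%08X  %s' % (result.offset, result.description))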

Once you have extracted this, you might be interested in analyzing the binaries or other software contained inside. Often a disassembler is useful. My current favorite disassembler is Binary Ninja, but there are a number of other options as well (IDA Pro, radare2, Hopper, and so on).

Basic Tools

There are a few tools that I consider absolutely essential to any sort of hardware hacking exercise. These tools are fundamental to gaining an understanding of the device and accessing multiple types of interfaces on the device.

Screwdriver Set

A screwdriver set might be an obvious thing, but you’ll want one with bits that can get into tight places and that are appropriately sized to the screws on your device (using the wrong size of Phillips bit is one of the easiest ways to strip a screw). Many devices also use “security screws”, which seems to be a term applied to just about any screw that doesn’t come in your standard household tool kit. (I’ve seen Torx, triangle bits, square bits, Torx with a center pin, etc.)

I have a wonderful driver kit from iFixit, and I’ve found almost nothing that it won’t open. The extension driver helps get into smaller spaces, and the 64 bits cover just about everything. I personally like to support iFixit because they have great write-ups and teardowns, but there are also cheaper clones of this toolkit.

Openers

Many devices are sealed with plastic catches or pieces that are press-fit together. For these, you’ll need some kind of opener (sometimes called a “spudger”) to pry them apart. I find a variety of shapes useful. You can get these as part of a combined tool kit from iFixit, as iFixit clones, or as openers by themselves. I have found the iFixit model to be of slightly higher quality, but I also carry a cheap clone for occasional travel use.

The very thin metal one with a plastic handle is probably my favorite opener – it fits into the thinnest openings, but consequently it also bends fairly easily. I’ve been through a few due to bending damage. Be careful how you use these tools, and make sure your hand is not where they will go if they slip! They are not quite razor-blade sharp, but they will cut your hand with a bit of force behind them.

Multimeter

I get it, you’re looking to hack the device, not rewire your car. That being said, for a lot of tasks, a halfway decent multimeter is somewhere between an absolute requirement and a massive time saver. Some of the tasks a multimeter will help with include:

  • Identifying unknown pinouts
  • Finding the ground pin for a UART
  • Checking which components are connected
  • Figuring out what kind of power supply you need
  • Checking the voltage on an interface to make sure you don’t blow something up

I have several multimeters (more than one is important for electronics work), but you can get by with a single one for your IoT hacking projects. The UNI-T UT-61E is a popular model at a good price/performance ratio, but its safety ratings are a little optimistic. The EEVBlog BM235 is my favorite of my meters, but a little higher end (aka expensive). If you’re buying for work, the Fluke 87V is the “gold standard” of multimeters.

If you buy a cheap meter, it will probably work for IoT projects, but there are many multimeters that are unsafe out there. Please do not use these cheap meters on “mains” electricity, high voltage power supplies, anything coming out of the wall, etc. Your personal safety is not worth saving $40.

Soldering Iron

You will find a lot of unpopulated headers (just the holes in the circuit board) on production IoT devices. The headers for various debug interfaces are left off, either as a cost saving, for space reasons, or both. The headers were used during the development process, and the manufacturer often leaves the connections in place either to avoid redoing the printed circuit board (PCB) layout, or to be able to debug failures in the field.

In order to connect to these unpopulated headers, you will want to solder your own headers in their place. To do so, you’ll need a soldering iron. To minimize the risk of damaging the board in the process, use a soldering iron with a variable temperature and a small tip. The Hakko FX-888D is very popular and a very nice option, but you can still do good work with something like an Aoyue station or other budget options. Just don’t use a soldering iron designed for plumbing or similar uses – you’ll just end up burning the board.

Likewise, you’ll want to practice your soldering skills before you start work on your target board – find some small soldering projects to practice on, or some throwaway scrap electronics to work on.

Network Interfaces

Obviously, these devices have network interfaces. After all, they are the “Internet of Things”, so a network connection would seem to be a requirement. Nearly universally, 802.11 connectivity is present (sometimes on just a base station), and Ethernet (10/100 or Gigabit) interfaces are also very common.

Wired Network Sniffing

The easiest way to sniff a wired network is often a second interface on your computer. I’m a huge fan of a USB 3.0 to dual Gigabit Ethernet adapter, which even comes in a USB-C version for those using one of the newer laptops or MacBooks that only support USB-C. Either option gives you two network ports to work with, even on laptops without built-in wired interfaces.

Beyond this, you’ll need software for the sniffing. Wireshark is an obvious tool for raw packet capture, but you’ll often also want HTTP/HTTPS sniffing, for which Burp Suite is the de facto standard, but mitmproxy is an up-and-coming contender with a lot of nice features.
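
As a small illustration of the scripted interception mitmproxy makes possible, here is a minimal addon sketch (the file and class names are arbitrary) that just logs every request the device makes through the proxy:

# log_requests.py -- run with: mitmdump -s log_requests.py
class RequestLogger:
    def request(self, flow):
        # flow.request is the parsed HTTP request passing through the proxy
        print(flow.request.method, flow.request.pretty_url)

addons = [RequestLogger()]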

Wireless Network Sniffing

Most common wireless network interfaces on laptops support monitor mode, but perhaps you’d like to stay connected to the internet on one interface while sniffing on another. Alfa wireless cards like the AWUS036NH and the AWUS036ACH have been quite popular for a while, but I personally like using the tiny RT5370-based adapters for assessments not requiring long range, due to their compact size and portability.

Wired (Debug/Internal) Interfaces

There are many subtle interfaces on IoT devices, intended for either debug use, or for various components to communicate with each other. For example:

  • SPI/I2C for flash chips
  • SPI/SD for wifi chips
  • UART for serial consoles
  • UART for bluetooth/wifi controllers
  • JTAG/SWD for debugging processors
  • ICSP for In-Circuit Programming

UART

Though there are many universal devices that can do other things, I run into UARTs so often that I like having a standalone adapter for this. Additionally, having a standalone adapter allows me to maintain a UART connection at the same time as I’m working with JTAG/SWD or other interfaces.

You can get a standalone cable for around $10 that can be used for most UART interfaces. (On most devices I’ve seen, the UART interface is 3.3v, and these cables work well for that.) Most of these cables have the following pinout, but make sure you check your own:

  • Red: +5V (Don’t connect on most boards)
  • Black: GND
  • Green: TX from Computer, RX from Device
  • White: RX from Computer, TX from Device

There are also a number of breakouts for the FT232RL or the CH340 chips for UART to USB. These provide a row of headers to connect jumpers between your target device and the adapter. I prefer the simplicity of the cables (and fewer jumper ends to come loose during my testing), but this is further evidence that there are a number of options to provide the same capabilities.
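
Whichever adapter you choose, the host side usually just appears as a serial port, so a few lines of Python with pyserial are enough to poke at a console. A quick sketch, assuming a typical 115200 8N1 console on /dev/ttyUSB0 (both are assumptions; check your device):

import serial

# Placeholder port and baud rate -- adjust for your adapter and target.
console = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
console.write(b'\r\n')   # nudge the console into printing a prompt
print(console.read(1024).decode('ascii', errors='replace'))
console.close()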

Universal Interfaces (JTAG/SWD/I2C/SPI)

There are a number of interface boards referred to as “universal interfaces” that have the capability to interface with a wide variety of protocols. These largely fit into two categories:

  • Bit-banging microcontrollers
  • Hardware interfaces (dominated by the FT*232 series from FTDI)

There are a number of options for implementing a bit-banging solution for speaking these protocols, ranging from software projects to run on an Arduino, to projects like the Bus Pirate, which uses a PIC microcontroller. These generally present a serial interface (UART) to the host computer and applications, and use in-band signalling for configuration and settings. There may be some timing issues on certain devices, as microcontrollers often cannot update multiple output pins in the same clock cycle.

Hardware interfaces expose a dedicated USB endpoint to talk to the device, and though this can be configured, it is done via USB endpoints and registers. The protocols are implemented in semi-dedicated hardware. In my experience, these devices are both faster and more reliable than bit-banging microcontrollers, but you are limited to whatever protocols are supported by the particular device, or the capabilities of the software to drive them. (For example, the FT*232H series can do most protocols via bit-banging, but it updates an entire register at a time, and has high enough speed to run the clock rate of many protocols.)

The FT2232H and FT232H (not to be confused with the FT232RL, which is UART only), in particular, have been incorporated into a number of different breakout boards that make excellent universal interfaces.
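
As a rough example of what these hardware interfaces look like in practice, an FT232H-based breakout can be driven from Python with the pyftdi library. The sketch below reads a flash chip’s ID; the FTDI URL, chip-select, and the assumption that the target is a standard SPI flash answering the 0x9F JEDEC ID command are all mine, not a given:

from pyftdi.spi import SpiController

spi = SpiController()
spi.configure('ftdi://ftdi:232h/1')           # first FT232H found on USB
flash = spi.get_port(cs=0, freq=1e6, mode=0)  # 1 MHz, SPI mode 0

# 0x9F is the standard JEDEC "read ID" command for SPI flash chips.
jedec_id = flash.exchange([0x9F], 3)
print('JEDEC ID:', jedec_id.hex())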

Logic Analyzer

When you have an unknown protocol, unknown pinout, or unknown protocol settings (baud rate, polarity, parity, etc.), a logic analyzer can dramatically help by allowing you a direct look at the signals being passed between chips or interfaces.

I have a Saleae Logic 8, which is a great value logic analyzer. It has a compact size, and Saleae’s software is really excellent and easy to use. I’ve used it to discover the pinout for many unlabeled ports, discover the settings for UARTs, and just generally snoop on traffic between two chips on a board.

Though there are cheap knock-offs available on eBay or AliExpress, I have tried them and found them to be of very poor quality, and unfortunately the open-source sigrok software is not quite up to the quality of the Saleae software. Additionally, the clones rarely have any input protection, so it’s easy to blow up the analyzer itself.

Wireless

Obviously, the Internet of Things has quite a number of wireless devices. Some of these devices use Wi-Fi (discussed above) but many use other wireless protocols. Bluetooth (particularly Bluetooth LE) is quite common, but in other areas, such as home automation, other protocols prevail. Many of these are based on 802.15.4 (such as Zigbee) or use proprietary protocols in the 433 MHz, 915 MHz, or 2.4 GHz ISM bands (Z-Wave among them).

Bluetooth

Bluetooth devices are incredibly common, and Bluetooth Low Energy (starting with Bluetooth 4.0) is very popular for IoT devices. Most devices that do not stream audio, provide IP connectivity, or have other high-bandwidth needs seem to be moving to Bluetooth Low Energy, probably for several reasons:

  1. Lower power consumption (battery friendly)
  2. Cheaper chipsets
  3. Less complex implementation

There is essentially only one tool I can really recommend for assessing Bluetooth, and that is the Ubertooth One (Amazon). This can follow and capture Bluetooth communications, providing output in pcap or pcap-ng format, allowing you to import the communications into Wireshark for later analysis. (You can also use other pcap-based tools like scapy for analysis of the resulting pcaps.) The Ubertooth tools are available as packages in Debian, Ubuntu, or Kali, but you can get a more up-to-date version of the software from their GitHub repository.
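
For example, once the Ubertooth tools have written out a capture, a few lines of scapy are enough to start exploring it; this is a minimal sketch with a placeholder filename:

from scapy.all import rdpcap

packets = rdpcap('ubertooth_capture.pcap')   # placeholder capture file
print(len(packets), 'packets captured')
for pkt in packets[:5]:
    pkt.show()                               # dump the decoded layers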

Adafruit also offers a BLE Sniffer which works only for Bluetooth Low Energy and utilizes a Nordic Semiconductor BLE chip with special firmware for sniffing. The software for this works well on Windows, but not so well on Linux, where it is a Python script that tends to be more difficult to use than the Ubertooth tools.

Software Defined Radio

For custom protocols, or to enable lower-level evaluation or attacks of radio-based systems, Software Defined Radio presents an excellent opportunity for direct interaction with the RF side of the IoT device. This can range from only receiving (for purposes of understanding and reverse engineering the device) to being able to simultaneously receive and transmit (full-duplex) depending upon the needs of your assessment.

For simply receiving, there are simple DVB-T dongles that have been repurposed as general-purpose SDRs, often referred to as “RTL SDRs”, a name based on the Realtek RTL2832U chips present in the device. These can be used because the chip is capable of providing the raw samples to the host operating system, and because of their low cost, a large open source community has emerged. Companies like NooElec are now even offering custom built hardware based on these chips for the SDR community. There’s also a kit that expands the receive range of the RTL-SDR dongles.
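
To give a sense of how little code it takes to pull raw IQ samples from one of these dongles, here is a short pyrtlsdr sketch (the 433.92 MHz centre frequency is just an illustrative choice for a common ISM-band remote):

from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6     # Hz
sdr.center_freq = 433.92e6    # Hz -- adjust for your target device
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # complex IQ samples
sdr.close()
print(len(samples), 'samples captured')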

In order to transmit as well, the hardware is significantly more complex, and most options in this space are driven by an FPGA or other powerful processor. Even a few years ago, the capabilities here were very expensive with tools like the USRP. However, the HackRF by Great Scott Gadgets and the BladeRF by Nuand have offered a great deal of capability for a hacker-friendly price.

I personally have a BladeRF, but I honestly wish I had bought a HackRF instead. The HackRF has a wider usable frequency range (especially at the low end), while the BladeRF requires a relatively expensive upconverter to cover those bands. The HackRF also seems to have a much more active community and better support in some areas of open source software.

Other Useful Tools

It is occasionally useful to use an oscilloscope to look at RF signals or check signal integrity, but I have almost never found this necessary.

Specialized JTAG programmers for specific hardware often work better, but they cost quite a bit more and only work with those specific targets.

For dumping flash chips, Xeltek programmers/dumpers are considered the “top of the line” and do an incredible job, but are at a price point such that only labs doing this on a regular basis find it worthwhile.

Slides

PDF: The IoT Hacker’s Toolkit

Lubuntu Blog: This Week in Lubuntu Development #3

Mon, 16/04/2018 - 6:45 PM
Here is the third issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. Here's what she has been working on: Start page for Evince. Start docs for the Document Viewer. Start work on the GNOME […]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Mon, 16/04/2018 - 4:07 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


Elizabeth K. Joseph: SCaLE16x with Ubuntu, CI/CD and more!

Fri, 13/04/2018 - 10:49 PM

Last month I made my way down to Pasadena for one of my favorite conferences of the year, the Southern California Linux Expo. Like most years, I split my time between Ubuntu and stuff I was working on for my day job. This year that meant doing two talks and attending UbuCon on Thursday and half of Friday.

As with past years, UbuCon at SCALE was hosted by Nathan Haines and Richard Gaskin. The schedule this year was very reflective about the history and changes in the project. In a talk from Sriram Ramkrishna of System76 titled “Unity Dumped Us! The Emotional Healing” he talked about the closing of development on the Unity desktop environment. System76 is primarily a desktop company, so the abrupt change of direction from Canonical took some adjusting to and was a little painful. But out of it came their Ubuntu derivative Pop!_OS and a community around it that they’re quite proud of. In the talk “The Changing Face of Ubuntu” Nathan Haines walked through Ubuntu history to demonstrate the changes that have happened within the project over the years, and allow us to look at the changes today with some historical perspective. The Ubuntu project has always been about change. Jono Bacon was in the final talk slot of the event to give a community management talk titled “Ubuntu: Lessons Learned”. Another retrospective, he drew from his experience when he was the Ubuntu Community Manager to share some insight into what worked and what didn’t in the community. Particularly noteworthy for me were his points about community members needing direction more than options (something I’ve also seen in my work, discrete tasks have a higher chance of being taken than broad contribution requests) and the importance of setting expectations for community members. Indeed, I’ve seen that expectations are frequently poorly communicated in communities where there is a company controlling direction of the project. A lot of frustration could be alleviated by being more clear about what is expected from the company and where the community plays a role.


UbuCon group photo courtesy of Nathan Haines (source)

The UbuCon this year wasn’t as big as those in years past, but we did pack the room with nearly 120 people for a few talks, including the one I did on “Keeping Your Ubuntu Systems Secure”. Nathan Haines suggested this topic when I was struggling to come up with a talk idea for the conference. At first I wasn’t sure what I’d say, but as I started taking notes about what I know about Ubuntu both from a systems administration perspective with servers, and as someone who has done a fair amount of user support in the community over the past decade, it turned out that I did have an entire talk worth of advice! None of what I shared was complicated or revolutionary, there was no kernel hardening in my talk or much use of third party security tools. Instead the talk focused on things like keeping your system updated, developing a fundamental understanding of how your system and Debian packages work, and tips around software management. The slides for my presentation are pretty wordy, so you can glean the tips I shared from them: Keeping_Your_Ubuntu_Systems_Secure-UbuConSummit_Scale16x.pdf.


Thanks to Nathan Haines for taking this photo during my talk (source)

The team running Ubuntu efforts at the conference rounded off SCALE by staffing a booth through the weekend. The Ubuntu booths have certainly evolved over the years; when I ran them it was always a bit cluttered and had quite the grass roots feeling to it (the booth in 2012). The booths the team put together now are simpler and more polished. This is definitely in line with the trend of a more polished open source software presence in general, so kudos to the team for making sure our little Ubuntu California crew of volunteers keeps up.

Shifting over to the more work-focused parts of the conference, on Friday I spoke at Container Day, with my talk being the first of the day. The great thing about going first is that I get to complete my talk and relax for the rest of the conference. The less great thing about it is that I get to experience all the A/V gotchas and be awake and ready to give a talk at 9:30AM. Still, I think the pros outweighed the cons and I was able to give a refresh of my “Advanced Continuous Delivery Strategies for Containerized Applications Using DC/OS” talk, which included a new demo that I finished writing the week before. The talk seemed to generate interest that led to good discussions later in the conference, and to my relief the live demo concluded without a problem. Slides from the talk can be found here: Advanced_CD_Using_DCOS-SCALE16x.pdf


Thanks to Nathan Handler for taking this photo during my talk (source)

Saturday and Sunday brought a duo of keynotes that I wouldn’t have expected at an open source conference five years ago, from Microsoft and Amazon. In both these keynotes the speaker recognized the importance of open source today in the industry, which has fueled the shift in perspective and direction regarding open source for these companies. There’s certainly a celebration to be had around this, when companies are contributing to open source because it makes business sense to do so, we all benefit from the increased opportunities that presents. On the other hand, it has caused disruption in the older open source communities, and some have struggled to continue to find personal value and meaning in this new open source world. I’ve been thinking a lot about this since the conference and have started putting together a talk about it, nicely timed for the 20th anniversary of the “open source” term. I want to explore how veteran contributors stay passionate and engaged, and how we can bring this same feeling to new contributors who came down different paths to join open source communities.

Regular talks began on Saturday with me attending Nathan Handler’s talk on “Terraforming all the things” where he shared some of the work they’ve been doing at Yelp that has resulted in things like DNS records and CDN configuration being handled by Terraform. From there I went to a talk by Brian Proffitt where he talked about metrics in communities and the Community Health Analytics Open Source Software (CHAOSS) project. I spent much of the rest of the day in the “hallway track” catching up with people, but at the end I popped into a talk by Steve Wong on “Running Containerized Workloads in an on-prem Datacenter” where he discussed the role that bare metal continues to have in the industry, even as many rush to the cloud for a turnkey solution.

It was at this talk where I had the pleasure of meeting one of our newest Account Executives at Mesosphere, Kelly Bond, and also had some time to catch up with my colleague Jörg Schad.


Jörg, me, Kelly

Nuritzi Sanchez presented my favorite talk on Sunday, on Endless OS. They build a Linux distribution using Flatpak and as an organization work on the problem of access to technology in developing nations. I’ve long been concerned about cellphone-only access in these countries. You need a mix of a system that’s tolerant of being offline and that has input devices (like keyboards!) that allow work to be done on them. They’re doing really interesting work on the technical side related to offline content and general architecture around a system that needs to be conscious of offline status, but they’re also developing deployment strategies on the ground in places like Indonesia that will ensure the local community can succeed long term. I have a lot of respect for the people working toward all this, and really want to see this organization succeed.

I’m always grateful to participate in this conference. It’s grown a lot over the years and it certainly has changed, but the autonomy given to the special events like UbuCon allows for a conference that brings together lots of different voices and perspectives all in one place. I also have a lot of friends who attend this conference, many of whom span jobs and open source projects I’ve worked on over more than a decade. Building friendships and reconnecting with people is part of what makes the work I do in open source so important to me, and not just a job. Thanks to everyone who continues to make this possible year after year in beautiful Pasadena.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157693153653781

Simon Raffeiner: I went to Fukushima

Fri, 13/04/2018 - 1:36 PM

I'm an engineer and interested in all kinds of technology, especially if it is used to build something big. But I'm also fascinated by what happens when things suddenly change and don't go as expected, and especially by everything that's left behind after technological and social revolutions or disasters. In October 2017 I travelled across Japan and decided to visit one of the places where technology had failed in the worst way imaginable: the Fukushima Evacuation Zone.

The post I went to Fukushima appeared first on LIEBERBIBER.

Kees Cook: security things in Linux v4.16

Fri, 13/04/2018 - 2:04 AM

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532   13.2%
  task_struct              0.2%       4480/1630720
  RAW                      0.3%        300/96000
  RAWv6                    2.1%       1408/64768
  ext4_inode_cache         3.0%     269760/8740224
  dentry                  11.1%     585984/5273856
  mm_struct               29.1%      54912/188448
  kmalloc-8              100.0%      24576/24576
  kmalloc-16             100.0%      28672/28672
  kmalloc-32             100.0%      81920/81920
  kmalloc-192            100.0%      96768/96768
  kmalloc-128            100.0%     143360/143360
  names_cache            100.0%     163840/163840
  kmalloc-64             100.0%     167936/167936
  kmalloc-256            100.0%     339968/339968
  kmalloc-512            100.0%     350720/350720
  kmalloc-96             100.0%     455616/455616
  kmalloc-8192           100.0%     655360/655360
  kmalloc-1024           100.0%     812032/812032
  kmalloc-4096           100.0%     819200/819200
  kmalloc-2048           100.0%    1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is default on, and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions which lets the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Ubuntu Podcast from the UK LoCo: S11E06 – Six Feet Over It - Ubuntu Podcast

Thu, 12/04/2018 - 4:15 PM

This week we review the Dell XPS 13 (9370) Developer Edition laptop, bring you some command line lurve and go over all your feedback.

It’s Season 11 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Ante Karamatić: Spaces – uncomplicating your network

Thu, 12/04/2018 - 6:44 AM
An old OpenStack network architecture

For the past 5-6 years I’ve been in the business of deploying cloud solutions for our customers. The vast majority of that was some form of OpenStack, either a simple cloud or a complicated one. But when you think about it – what is a simple cloud? It’s easy to say that a small number of machines makes a simple cloud and a large number makes a complicated one. But that is not true. The complexity of a typical IaaS solution is pretty much determined by network complexity. Network, in all shapes and forms, from the underlay network to the customer’s overlay network requirements. I’ll try to explain how we deal with the underlay part in this blog.

It’s not a secret that a traditional tree-like network architecture just doesn’t work for cloud environments. There are multiple reasons why: it doesn’t scale very well, it requires big OSI layer 2 domains and… well, it’s based on OSI layer 2. Debugging issues on that level is never a joyful experience. Therefore, for IaaS environments one really wants a modern design in the form of a spine-leaf architecture. A layer 3 spine-leaf architecture. This allows us to have a bunch of smaller layer 2 domains, which then nicely correlate to availability zones, power zones, etc. However, managing environments with multiple layer 2 and therefore even more layer 3 domains requires a bit of rethinking. If we truly want to be effective in deploying and operating a cloud across multiple different layer 2 domains, we need to think of the network in a more abstract way. Luckily, this is nothing new.

In the traditional approach to networking, we talk about ToRs, management fabric, BMC/OOB fabric, etc. Most of the time these are layer 2 concepts. Fabric, after all, is a collection of switches. But the approach is correct; we should always talk about networks in abstract terms. Instead of talking about subnets and VLANs, we should talk about the purpose of the network. This becomes important when we talk about spine-leaf architecture and multiple different subnets that serve the same purpose. In rack 1, subnet 172.16.1.0/24 is the management network, but in rack 2, the management network is on subnet 192.168.1.0/24, and so on. It’s obvious that it’s much nicer to abstract those subnets into a ‘management network’. Still, nothing new. We do this every day.

So… Why do our tools and applications still require us to use VLANs, subnets and IPs? If we deploy the same application across different racks, why do we have to keep separate configurations for each of the units of the same application? What we really want is to have all of our Keystones listening on the OpenStack Public API network, and not on subnets 192.168.10.0/24, 192.168.20.0/24 and 192.168.30.0/24. We think about the application on a network, but we end up configuring exact copies of the same application (its units) differently on different subnets. Clearly our configuration tools are not doing what we want, but rather forcing us to transform our way of thinking into what those tools need. It’s a paradox that OpenStack is not that complicated; rather, it’s made complicated by the tools used to deploy it.

While trying to solve this problem in our deployments at Canonical, we came up with the concept of spaces. A space would be this abstracted network that we have in our heads, but somehow fail to put into our tools. Again, spaces are not a revolutionary concept in networking; they have been in our heads all this time. So, how do we implement spaces at Canonical?

We have grown the concept of spaces across all of our tooling: MAAS, Juju and charms. When we configure MAAS to manage our bare metal machines, we do not define networks as subnets or VLANs; we rather define networks as spaces. A space has a purpose, description and a few other attributes. VLANs, and indirectly subnets too, become properties of the space, instead of the other way around. This also means that when we deploy a machine, we deploy it connected to a space. When we deploy a machine, we usually do not deploy it on a specific network, but rather with specific requirements: must be able to talk to X, must have Y CPU and Z RAM. If you ever asked yourself why it takes so much time to rack and stack a server, it’s because of this disconnect between what we want and how we handle the configuration.

We’ve also enabled Juju to make this kind of request – it asks MAAS for machines that are connected to a space, or a set of spaces. It then exposes these spaces to charms, so that each charm knows what kind of networks this application has at its disposal. This allows us to do ‘juju deploy keystone --bind public=public-space -n3’: deploy three keystones and connect them to a public-space, a space defined in MAAS. Which VLAN that will be, which subnet or IP, we do not care; the charm will get information from Juju about these “low level” terms (VLANs, IPs). We humans do not think of VLANs and subnets and IPs; at best we think in OSI layer 1 terms.

Sounds a bit complicated? Let’s flip it the other way around. What I can do now is define my application as “3 units of keystone, which use internal network for SQL, public network for exposing API, internal network for OpenStack’s internal communication and is also exposed on OAM network for management purposes” and this is precisely how we deploy OpenStack. In fact, the Juju bundle looks like this:

keystone:
  charm: cs:keystone
  num_units: 3
  bindings:
    "": oam-space
    public: public-space
    internal: internal-space
    shared-db: internal-space

Those who follow OpenStack development will notice that something similar has landed in OpenStack recently: routed provider networks. It’s the same concept, solving the same problem. It’s nice to see how Juju uses this out of the box.

Big thanks to the MAAS, Juju, charms and OpenStack communities for doing this. It has made deploying complex applications a breeze, and therefore shifted our focus to the bigger picture: IaaS modeling and some other, new challenges!

Launchpad News: Launchpad security advisory: cross-site-scripting in site search

Wed, 11/04/2018 - 10:40 AM
Summary

Mohamed Alaa reported that Launchpad’s Bing site search implementation had a cross-site-scripting vulnerability.  This was introduced on 2018-03-29, and fixed on 2018-04-10.  We have not found any evidence of this bug being actively exploited by attackers; the rest of this post is an explanation of the problem for the sake of transparency.

Details

Some time ago, Google announced that they would be discontinuing their Google Site Search product on 2018-04-01.  Since this served as part of the backend for Launchpad’s site search feature (“Search Launchpad” on the front page), we began to look around for a replacement.  We eventually settled on Bing Custom Search, implemented appropriate support in Launchpad, and switched over to it on 2018-03-29.

Unfortunately, we missed one detail.  Google Site Search’s XML API returns excerpts of search results as pre-escaped HTML, using <b> tags to indicate where search terms match.  This makes complete sense given its embedding in XML; it’s hard to see how that API could do otherwise.  The Launchpad integration code accordingly uses TAL code along these lines, using the structure keyword to explicitly indicate that the excerpts in question do not require HTML-escaping (like most good web frameworks, TAL’s default is to escape all variable content, so successful XSS attacks on Launchpad have historically been rare):

<div class="summary" tal:content="structure page/summary" />

However, Bing Custom Search’s JSON API returns excerpts of search results without any HTML escaping.  Again, in the context of the API in question, this makes complete sense as a default behaviour (though a textFormat=HTML switch is available to change this); but, in the absence of appropriate handling, this meant that those excerpts were passed through to the TAL code above without escaping.  As a result, if you could craft search terms that match a portion of an existing page on Launchpad that shows scripting tags (such as a bug about an XSS vulnerability in another piece of software hosted on Launchpad), and convince other people to follow a suitable search link, then you could cause that code to be executed in other users’ browsers.

The fix was, of course, to simply escape the data returned by Bing Custom Search.  Thanks to Mohamed Alaa for their disclosure.
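
As an illustration of the class of fix (a sketch only, not the actual Launchpad code; the 'snippet' field name is hypothetical), the idea is simply to HTML-escape each excerpt before it reaches a template that treats it as pre-rendered HTML:

from html import escape

def render_excerpt(result):
    # Escape the raw excerpt so any markup in the search result is displayed
    # literally rather than interpreted by the browser.
    return escape(result.get('snippet', ''))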

Jorge Castro: Kubernetes Ask Me Anything on Reddit

Tue, 10/04/2018 - 2:00 AM

A bunch of Kubernetes developers are doing an Ask Me Anything today on Reddit. If you’re interested in asking any questions, hope to see you there!

Lubuntu Blog: This Week in Lubuntu Development #2

Mon, 09/04/2018 - 6:00 PM
Here is the second issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General We released 18.04 Final Beta this week. You can find the announcement here. The encrypted LVM bug we described last week has been fixed (thanks to Steve Langasek!). We are still working hard to try […]

Nathan Haines: Announcing the Ubuntu 18.04 LTS Free Culture Showcase winners

Sun, 08/04/2018 - 9:00 AM

In just under 3 weeks, Ubuntu 18.04 LTS launches. This exciting new release is a new Long Term Support release and will introduce many Ubuntu users to GNOME Shell and a closer upstream experience. In addition, Ubuntu developers have been working long and hard to ensure that 18.04 is a big, brilliant release that builds a bridge from 16.04 LTS to a better, bigger platform that can be built upon, without becoming unnecessarily boisterous.

As with each Ubuntu release, 18.04 showcases community artwork with bravado. Thanks to the Ubuntu Free Culture Showcase, we have 12 new wallpapers that will ship with the release.

And since this is an LTS, we’re refreshing the example content on the install media. Not only can you test your graphics and audio hardware for compatibility, but with entertaining media as well.

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper, video entry, or song. You’ll find this media on your Ubuntu desktop after you upgrade or install Ubuntu 18.04 LTS on April 26th!

The Fridge: Ubuntu 18.04 LTS (Bionic Beaver) Final Beta released

Fri, 06/04/2018 - 3:07 PM
The Ubuntu team is pleased to announce the final beta release of the Ubuntu 18.04 LTS Desktop, Server, and Cloud products.

Codenamed "Bionic Beaver", 18.04 LTS continues Ubuntu's proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu Budgie, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.

The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 18.04 that should be representative of the features intended to ship with the final release expected on April 26th, 2018.

Ubuntu, Ubuntu Server, Cloud Images:

Bionic Final Beta includes updated versions of most of our core set of packages, including a current 4.15 kernel, and much more.

To upgrade to Ubuntu 18.04 Final Beta from Ubuntu 17.10, follow these instructions: https://help.ubuntu.com/community/BionicUpgrades

The Ubuntu 18.04 Final Beta images can be downloaded at: http://releases.ubuntu.com/18.04/ (Ubuntu and Ubuntu Server on x86)

This Ubuntu Server image features the next generation Subiquity server installer, bringing the comfortable live session and speedy install of the Ubuntu Desktop to server users at last. This new installer does not support the same set of installation options as the previous server installer, so the "debian-installer" image continues to be made available in parallel. For more information about the installation options, please see: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Ubuntu_Server

Additional images can be found at the following links:
http://cloud-images.ubuntu.com/daily/server/bionic/current/ (Cloud Images)
http://cdimage.ubuntu.com/releases/18.04/beta-2/ (Non-x86, and d-i Server)
http://cdimage.ubuntu.com/netboot/18.04/ (Netboot)

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20180404 or higher) should be considered a beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 18.04 Final Beta can be found at: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes

Kubuntu:

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/18.04/beta-2/ More information on Kubuntu Final Beta can be found here: https://wiki.ubuntu.com/BionicBeaver/Beta2/Kubuntu

Lubuntu:

Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource hungry and more energy-efficient by using lightweight applications and LXDE, The Lightweight X11 Desktop Environment, as its default GUI. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/18.04/beta-2/

Ubuntu Budgie:

Ubuntu Budgie is a community developed desktop, integrating the Budgie Desktop Environment with Ubuntu at its core. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-budgie/releases/18.04/beta-2/ More information on Ubuntu Budgie Final Beta can be found here: https://ubuntubudgie.org/blog/2018/04/03/18-04-beta-2

UbuntuKylin:

UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/18.04/beta-2/ More information on UbuntuKylin Final Beta can be found here: https://wiki.ubuntu.com/BionicBeaver/Beta2/UbuntuKylin

Ubuntu MATE:

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-mate/releases/18.04/beta-2/

Ubuntu Studio:

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntustudio/releases/18.04/beta-2/ More information about Ubuntu Studio Final Beta can be found here: https://wiki.ubuntu.com/BionicBeaver/Beta2/UbuntuStudio

Xubuntu:

Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment. The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/xubuntu/releases/18.04/beta-2/ More information about Xubuntu Final Beta can be found here: http://wiki.xubuntu.org/releases/18.04/release-notes

Regular daily images for Ubuntu, and all flavours, can be found at: http://cdimage.ubuntu.com

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away. Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki. To sign up for future Ubuntu announcements, please subscribe to Ubuntu's very low volume announcement list at: http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

https://lists.ubuntu.com/archives/ubuntu-announce/2018-April/000230.html

Originally posted to the Ubuntu Release mailing list on Fri Apr 6 06:02:21 UTC 2018 by Steve Langasek, on behalf of the Ubuntu Release Team

Ubuntu MATE: Ubuntu MATE 18.04 Beta 2

Fri, 06/04/2018 - 10:15 AM

Yeah baby! You know you want some of what we've got. Come and have a fling with Ubuntu MATE 18.04 Beta 2.

We are preparing Ubuntu MATE 18.04 (Bionic Beaver) for distribution on April 26th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.


What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers

What changed since the Ubuntu MATE 17.10 final release?

We've been refining Ubuntu MATE since the 17.10 release and making improvements to ensure that Ubuntu MATE offers what our users want today and what they'll need over the life of this LTS release. This is what's changed since 17.10.

MATE Desktop 1.20

As you may have seen, MATE Desktop 1.20 was released in February 2018 and offers some significant improvements:

  • MATE Desktop 1.20 supports HiDPI displays with dynamic detection and scaling.
    • HiDPI hints for Qt applications are also pushed to the environment to improve cross toolkit integration.
    • Toggling HiDPI modes triggers dynamic resize and scale, no log out/in required.
  • Marco now supports DRI3 and Present, if available.
    • Frame rates in games are significantly increased when using Marco.
  • Marco now supports drag to quadrant window tiling, cursor keys can be used to navigate the Alt + Tab switcher and keyboard shortcuts to move windows to another monitor were added.

If your hardware/drivers support DRI3 then Marco compositing is now hardware accelerated. This dramatically improves 3D rendering performance, particularly in games. If your hardware doesn't support DRI3 then Marco will fallback to a software compositor.

You can read the release announcement to discover everything that improved in MATE Desktop 1.20. It is a significant release that also includes a considerable number of bug fixes.

Since 18.04 beta 1 upstream released MATE Desktop 1.20.1 and the following updates have recently landed in Ubuntu MATE:

  • mate-control-center 1.20.2
  • marco 1.20.1
  • mate-desktop 1.20.1
  • atril 1.20.1
  • mate-power-manager 1.20.1
  • mate-panel 1.20.1
  • mate-settings-daemon 1.20.1
  • pluma 1.20.1
  • mate-applets 1.20.1
  • mate-calc 1.20.1
  • libmatekbd 1.20.1
  • caja 1.20.1
  • mate-sensors-applet 1.20.1

These roll up a collection of fixes, many of which Ubuntu MATE was already carrying patch sets for. The most notable change is that Marco is now fully HiDPI aware and window controls are scaled correctly.

New and updated desktop layouts - new since 18.04 beta 1

I have decided to add a new layout to the collection available in Ubuntu MATE 18.04. It will be called Familiar and is based on the Traditional layout with the menu-bar (Applications, Places, System) replaced by Brisk Menu. It looks like this:


Familiar is now the default layout; Traditional will continue to be shipped, unchanged, and will be available via MATE Tweak for those who prefer it.

Since 18.04 beta 1 the Netbook layout has been updated, maximised windows now maximise into the top panel like the Mutiny layout. Brisk Menu replaces the custom-menu and mate-dock-applet is used for application pinning and launching. When maximising a window this offers some decent space savings.

Since 18.04 beta 1 the Mutiny layout has been tweaked so the launcher icon is the same size of the docked application icons. We heard you, we understand. It's the little things, right?

Global Menu and MATE HUD

The Global Menu integration is much improved. When the Global Menu is added to a panel the application menus are automatically removed from the application window and only presented globally, no additional configuration (as was the case) is required. Likewise removing the Global Menu from a panel will restore menus to their application windows.


The HUD now has a 250ms (default) timeout, holding Alt any longer won't trigger the HUD. This is consistent with how the HUD in Unity 7 works. We've fixed a number of issues reported by users of Ubuntu MATE 17.10 regarding the HUD swallowing key presses. The HUD is also HiDPI aware now.

Ubuntu MATE Welcome - new since 18.04 beta 1

Welcome and Boutique have been given some love.

  • The software listings in the Boutique have been refreshed, with some applications being removed, many updated and some new additions.
  • Welcome now has snappier animations and transitions

Indicators by default

Ubuntu MATE 18.04 uses Indicators by default in all layouts. These will be familiar to anyone who has used Unity 7 and offer better accessibility support and ease of use over notification area applets. The volume in Indicator Sound can now be overdriven, so it is consistent with the MATE sound preferences. Notification area applets are still supported as a fallback.


MATE Dock Applet

MATE Dock Applet is used in the Mutiny layout, but anyone can add it to a panel to create custom panel arrangements. The new version adds support for BAMF and icon scrolling.

  • MATE Dock Applet no longer uses its own method of matching icons to applications and instead uses BAMF. What this means for users is that from now on the applet will be a lot better at matching applications and windows to their dock icons.
  • Icon scrolling is useful when the dock has limited space on its panel and will prevent it from expanding over other applets. This addresses an issue reported by several users in Ubuntu MATE 17.10.

Brisk Menu

Many users commented that when using the Mutiny layout the "traditional" menu felt out of place. The Solus Project, the maintainers of Brisk Menu, have added a dash-style launcher at our request. Ubuntu MATE 18.04 includes a patched version of Brisk Menu that includes this new dash launcher. When MATE Tweak is used to enable the Mutiny or Cupertino layout, it now switches on the dash launcher which enables a full screen, searchable, application launcher. Similarly, switching to the other panel layouts restores the more traditional Brisk Menu.

Since 18.04 beta 1 we tweaked the style of the session control buttons in Brisk Menu, and those updates will be waiting for you when you install Ubuntu MATE 18.04 Beta 2.

MATE Window Applets

The Mutiny and Netbook layouts now integrate the mate-window-applets. You can see these in action alongside an updated Mutiny layout here:

Mutiny undecorated maximised windows

Minimal Installation

If you follow the Ubuntu news closely you may have heard that 18.04 now has a Minimal Install option. Ubuntu MATE was at the front of the queue to take advantage of this new feature.


The Minimal Install is a new option presented in the installer that will install just the MATE Desktop, its utilities, its themes and Firefox. All the other applications, such as the office suite, email client, video player and audio manager, are not installed. If you're interested, here is the complete list of software that will not be present on a minimal install of Ubuntu MATE 18.04.

So, who's this aimed at? There are users who like to uninstall the software they do not need or want and build out their own desktop experience. For those users, a minimal install is a great platform to build on. For those of you interested in creating "kiosk" style devices, such as home-brew Steam machines or Kodi boxes, a minimal install is another useful starting point.

MATE Tweak

MATE Tweak can now toggle the HiDPI mode between auto detection, regular scaling and forced scaling. HiDPI mode changes are applied dynamically. MATE Tweak has a deeper understanding of Brisk Menu and Global Menu capabilities and manages them transparently while switching layouts. Switching layouts is far more reliable now too. We've removed the Interface section from MATE Tweak; sadly, all the features it tweaked have been dropped from GTK3, so they are now redundant.


We've made the following changes since 18.04 Beta 1:

  • Added support for the modifications to the Netbook layout.
  • Added a button to launch the Font preferences so users with HiDPI displays can fine tune their font DPI.
  • When saving a panel layout the Dock status will be saved too.
Caja

We've landed caja-eiciel and caja-seahorse.

  • caja-eiciel - An extension for Caja to edit access control lists (ACLs) and extended attributes (xattr)
  • caja-seahorse - An extension for Caja which allows encryption and decryption of OpenPGP files using GnuPG
Artwork, Fonts & Emoji

We are no longer shipping mate-backgrounds by default. They have served us well, but are looking a little stale now. We have created a new selection of high-quality wallpapers comprising some abstract designs and high-resolution photos from unsplash.com. The Ubuntu MATE Plymouth theme (boot logo) is now HiDPI aware. Our friends at Ubuntu Budgie have uploaded a new version of Slick Greeter which now fades in smoothly, rather than with the stuttering we saw in Ubuntu MATE 17.10. We've switched to Noto Sans for users of Japanese, Chinese and Korean fonts and glyphs. MATE Desktop 1.20 supports emoji input, so we've added a colour emoji font too.

New since 18.04 beta 1: the xcursor themes have been replaced with new cursors from MATE upstream, which also offer HiDPI support.

Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi around the time 18.04.1 is released, which should be sometime in July. It takes about a month to get the Raspberry Pi images built and tested and we simply don't have time to do this in time for the April release of 18.04.

Download Ubuntu MATE 18.04 Beta 2

We've redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE
  • The Desktop Layout button in Ubuntu MATE Welcome is extremely unreliable.
    • It is best to pretend you haven't seen that button and avoid clicking it. It will break your desktop, I promise.
  • Anyone upgrading from Ubuntu MATE 16.04 or newer may need to use MATE Tweak to reset the panel layout to one of the bundled layouts post upgrade.
    • Migrating panel layouts, particularly those without Indicator support, is hit and miss. Mostly miss.
Ubuntu family issues

This is our known list of bugs that affects all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

Kubuntu General News: Kubuntu Bionic Beaver (18.04 LTS) Beta 2 Released!

Fri, 06/04/2018 - 8:13am

The second beta of the Bionic Beaver (to become 18.04 LTS) has now been released, and is available for download!

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Bionic Beaver are not encouraged for:

  • Anyone needing a stable system
  • Anyone who is not comfortable running into occasional, or even frequent, breakage.

They are, however, recommended for:

  • Ubuntu flavour developers
  • Those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.

Beta 2 includes some software updates that are ready for broader testing. However, it is quite an early set of images, so you should expect some bugs.

You can:

 

Ubuntu Studio: 18.04 Beta Release

Fri, 06/04/2018 - 8:04am
Ubuntu Studio 18.04 Bionic Beaver Beta is released! The beta of the upcoming release of Ubuntu Studio 18.04 is ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/bionic/beta-2/. More information can be found in the Beta Release Notes. Reporting Bugs If you find any bugs with this release, please report them, and take your […]

Xubuntu: Xubuntu 18.04 Community Wallpaper Contest Winners!

Fri, 06/04/2018 - 12:36am

The Xubuntu team are happy to announce the results of the 18.04 community wallpaper contest!

We want to send out a huge thanks to every contestant; last time we had 92 submissions, but this time you all made us work much harder at picking out the best ones, with a total of 162 submissions! Great work! All of the submissions are browsable on the 18.04 contest page at contest.xubuntu.org.

Without further ado, here are the winners:

Note that the images listed above are resized for the website. For the full size images (up to 4K resolution!), make sure you have the package xubuntu-community-wallpapers installed. The package is installed by default in all new Xubuntu 18.04 installations.
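
If the package is not already present, for example on a system upgraded from an earlier release, it can be installed with apt (a small sketch using the package name mentioned above):

sudo apt install xubuntu-community-wallpapers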

Sebastian Dröge: Improving GStreamer performance on a high number of network streams by sharing threads between elements with Rust’s tokio crate

Thu, 05/04/2018 - 5:21pm

For one of our customers at Centricular we were working on a quite interesting project. Their use-case was basically to receive an as-high-as-possible number of audio RTP streams over UDP, transcode them, and then send them out via UDP again. Due to how GStreamer usually works, they were running into some performance issues.

This blog post will describe the first set of improvements that were implemented for this use-case, together with a minimal benchmark and the results. My colleague Mathieu will follow up with one or two other blog posts with the other improvements and a more full-featured benchmark.

The short version is that CPU usage decreased by about 65-75%, i.e. allowing 3-4x more streams with the same CPU usage. Also parallelization works better and usage of different CPU cores is more controllable, allowing for better scalability. And a fixed, but configurable number of threads is used, which is independent of the number of streams.

The code for this blog post can be found here.

Table of Contents
  1. GStreamer & Threads
  2. Thread-Sharing GStreamer Elements
  3. Available Elements
  4. Little Benchmark
  5. Conclusion
GStreamer & Threads

In GStreamer, by default each source runs in its own OS thread. Additionally, for receiving/sending RTP, there will be another thread in the RTP jitterbuffer, yet another thread for receiving RTCP (another source) and a last thread for sending RTCP at the right times. RTCP has to be received and sent for both the receiver and sender side of the pipeline, so the number of threads doubles. In sum, this gives at least 1 + 1 + (1 + 1) * 2 = 6 threads per RTP stream in this scenario. In a normal audio scenario, one packet is received/sent e.g. every 20ms on each stream, plus the occasional RTCP packet, so most of the time all these threads are only waiting.

Apart from the obvious waste of OS resources (1000 streams would mean 6000 threads), this also hurts performance because threads are constantly being woken up, which means context switches have to happen basically all the time.

To solve this we implemented a mechanism to share threads, and as a result we now have a fixed, but configurable, number of threads that is independent of the number of streams. We can run e.g. 500 streams just fine on a single thread with a single core, which was completely impossible before. In addition we also did some work to reduce the number of allocations for each packet, so that after startup no additional per-packet allocations happen for buffers. See Mathieu's upcoming blog post for details.

In this blog post, I'm going to write about a generic mechanism for sources, queues and similar elements to share their threads with each other. For the RTP-related bits (RTP jitterbuffer and RTCP timer) this was not used, due to the reuse of existing C codebases.

Thread-Sharing GStreamer Elements

The code in question can be found here, a small benchmark is in the examples directory and it is going to be used for the results later. A full-featured benchmark will come in Mathieu’s blog post.

This is a new GStreamer plugin, written in Rust around the Tokio crate, which provides asynchronous IO and, generally, a task scheduler.

While this could certainly also have been written in C around something like libuv, doing this kind of work in Rust is simply more productive and fun thanks to its safety guarantees and strong type system, which definitely reduced the amount of debugging a lot. In addition, "modern" language features like closures make working with futures much more ergonomic.

When using these elements it is important to have full control over the pipeline and its elements, and the dataflow inside the pipeline has to be carefully considered to properly configure how to share threads. For example the following two restrictions should be kept in mind all the time:

  1. Downstream of such an element, the streaming thread must never ever block for considerable amounts of time. Otherwise all other elements inside the same thread-group would be blocked too, even if they could otherwise do work.
  2. This generally all works better in live pipelines, where media is produced in real-time and not as fast as possible
Available Elements

So this repository currently contains the generic infrastructure (see the src/iocontext.rs source file) and a couple of elements:

  • a UDP source: ts-udpsrc, a replacement for udpsrc
  • an app source: ts-appsrc, a replacement for appsrc to inject packets into the pipeline from the application
  • a queue: ts-queue, a replacement for queue that is useful for adding buffering to a pipeline part. The upstream side of the queue will block if not called from another thread-sharing element, but if called from another thread-sharing element it will pause the current task asynchronously. That is, it asynchronously stops the upstream task from producing more data.
  • a proxysink/src element: ts-proxysrc, ts-proxysink, replacements for proxysink/proxysrc for connecting two pipelines with each other. This basically works like the queue, but split into two elements.
  • a tone generator source around spandsp: ts-tonesrc, a replacement for tonegeneratesrc. This also contains some minimal FFI bindings for that part of the spandsp C library.

All these elements have more or less the same API as their non-thread-sharing counterparts.

API-wise, each of these elements has a set of properties for controlling how it is sharing threads with other elements, and with which elements:

  • context: A string that defines in which group this element is. All elements with the same context are running on the same thread or group of threads,
  • context-threads: Number of threads to use in this context. -1 means exactly one thread, a value N of 1 or above uses N+1 threads (1 thread for polling fds, N worker threads), and 0 sets N to the number of available CPU cores. As long as no considerable work is done in these threads, -1 has proven to be the most efficient. See also this tokio GitHub issue
  • context-wait: Number of milliseconds that the threads will wait on each iteration. This allows reducing CPU usage even further by handling all events/packets that arrived during that timespan in one batch, instead of waking up the thread every time a small event happens, thus reducing context switches again

The elements are all pushing data downstream from a tokio thread whenever data is available, assuming that downstream does not block. If downstream is another thread-sharing element and it would have to block (e.g. a full queue), it instead returns a new future to upstream so that upstream can asynchronously wait on that future before producing more output. By this, back-pressure is implemented between different GStreamer elements without ever blocking any of the tokio threads. All this is implemented around the normal GStreamer data-flow mechanisms, there is no “tokio fast-path” between elements.
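
To make these properties a bit more concrete, here is a minimal command-line sketch of two receivers sharing one thread group. The context, context-threads and context-wait properties are the ones described above; the port property and the context name recv are illustrative assumptions, not something prescribed by the plugin:

gst-launch-1.0 \
  ts-udpsrc port=5004 context=recv context-threads=-1 context-wait=20 ! fakesink \
  ts-udpsrc port=5006 context=recv context-threads=-1 context-wait=20 ! fakesink

Both sources end up on the same single polling thread because they share the same context string; giving one of them a different context would move it into its own thread group.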

Little Benchmark

As mentioned above, there’s a small benchmark application in the examples directory. This basically sets up a configurable number of streams and directly connects them to a fakesink, throwing away all packets. Additionally there is another thread that is sending all these packets. As such, this is really the most basic benchmark and not very realistic but nonetheless it shows the same performance improvement as the real application. Again, see Mathieu’s upcoming blog post for a more realistic and complete benchmark.

When running it, make sure that your user can create enough fds. The benchmark will simply abort if not enough fds can be allocated. You can control this with ulimit -n SOME_NUMBER; allowing a couple of thousand is generally a good idea. The benchmarks below were run with 10000.
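
For example, to match the limit used for the numbers below, you can raise the descriptor limit in the shell that will run the benchmark before starting it (this affects only that shell session):

ulimit -n 10000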

After running cargo build --release to build the plugin itself, you can run the benchmark with:

cargo run --release --example udpsrc-benchmark -- 1000 ts-udpsrc -1 1 20

and in another shell the UDP sender with

cargo run --release --example udpsrc-benchmark-sender -- 1000

This runs 1000 streams, uses ts-udpsrc (the alternative would be udpsrc), and configures exactly one thread (-1), 1 context, and a wait time of 20ms. See above for what these settings mean. You can check CPU usage with e.g. top. Testing was done on an Intel i7-4790K, with Rust 1.25 and GStreamer 1.14. One packet is sent every 20ms for each stream.

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     1000     1000     x         x     44%
ts-udpsrc  1000     -1       1         0     18%
ts-udpsrc  1000     -1       1         20    13%
ts-udpsrc  1000     -1       2         20    15%
ts-udpsrc  1000     2        1         20    16%
ts-udpsrc  1000     2        2         20    27%

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     2000     2000     x         x     95%
ts-udpsrc  2000     -1       1         20    29%
ts-udpsrc  2000     -1       2         20    31%

Source     Streams  Threads  Contexts  Wait  CPU
ts-udpsrc  3000     -1       1         20    36%
ts-udpsrc  3000     -1       2         20    47%

Results for 3000 streams for the old udpsrc are not included as starting up that many threads needs too long.

The best configuration is apparently a single thread per context (see this tokio GitHub issue) and waiting 20ms on every iteration. Compared to the old udpsrc, CPU usage is about one third in that setting, and generally it seems to parallelize well. It's not clear to me why the last test uses 11% more CPU with two contexts, while in every other test the number of contexts does not really make a difference; it also made no difference for that many streams in the real test-case.

The waiting does not reduce CPU usage a lot in this benchmark, but in the real test-case it does. The reason is most likely that this benchmark basically sends all packets at once, then waits for the remaining time, then sends the next packets.

Take these numbers with caution, the real test-case in Mathieu’s blog post will show the improvements in the bigger picture, where it was generally a quarter of CPU usage and almost perfect parallelization when increasing the number of contexts.

Conclusion

Generally this was a fun exercise and we're quite happy with the results, especially the real-world results. It took me some time to understand how tokio works internally so that I could implement all kinds of customizations on top of it, but for normal usage of tokio that should not be required, and the overall design makes a lot of sense to me, as does the way futures are implemented in Rust. It requires some learning and understanding of how exactly the API can be used and how it behaves, but once that point is reached it seems like a very productive and performant solution for asynchronous IO. Modelling asynchronous IO problems with Rust-style futures seems a nice and intuitive fit.

The performance measurements also showed that GStreamer’s default usage of threads is not always optimal, and a model like in upipe or pipewire (or rather SPA) can provide better performance. But as this also shows, it is possible to implement something like this on top of GStreamer and for the common case, using threads like in GStreamer reduces the cognitive load on the developer a lot.

For a future version of GStreamer, I don’t think we should make the threading “manual” like in these two other projects, but instead provide some API additions that make it nicer to implement thread-sharing elements and to add ways in the GStreamer core to make streaming threads non-blocking. All this can be implemented already, but it could be nicer.

All this "only" reduced the number of threads, and thus the threading and context-switching overhead. Many other optimizations in other areas are still possible on top of this, for example optimizing receive performance and reducing the number of memory copies inside the pipeline even further. If that's something you would be interested in, feel free to get in touch.

And with that: Read Mathieu’s upcoming blog posts about the other parts, RTP jitterbuffer / RTCP timer thread sharing, and no allocations, and the full benchmark.

Dustin Kirkland: I'm Joining the Google Cloud Team!

Thu, 05/04/2018 - 1:48am

A couple of months ago, I reflected on "10 Amazing Years of Ubuntu and Canonical".  Indeed, it has been one hell of a ride, and that post is merely the tip of the proverbial iceberg...

The people I've met, the things I've learned, the places I've been, the users I've helped, the partners I've enabled, the customers I've served -- these are undoubtedly the most amazing and cherished experiences of my professional career to date.

And for the first time in my life, I can fully and completely grok the Ubuntu philosophy:
I am who I am, because of who we all are

With all my heart, I love what we've created in Ubuntu, I love the products that we've built at Canonical, I love each and every person involved.

So, it is with mixed emotion that the Canonical chapter of my life comes to a close and a new one begins...

Next week, I have the honor and privilege to join the Google Cloud product management team, and work beside so, so, so, so many people who continue to inspire me.

Words fail to express how excited I am about this opportunity!  In this new role, I will be working closely with Aparna Sinha, Michael Rubin, and Tim Hockin, and I hope to see many of you at KubeCon Europe in Copenhagen later this month.

My friends and family will be happy to know that we're staying here in Austin, and I'll be working from the Google Austin office with my new team, largely based in Sunnyvale, California.

The Ubuntu community can expect to see me remaining active in the Ubuntu developer community as a Core Dev and a MOTU, and I will continue to maintain many of the dozens of open source projects and packages that so many of you have come to rely upon.  Perhaps I'll even become more active upstream in Debian, if the Debian community will have me there too :-)

Finally, an enormous THANK YOU to everyone who has made this journey through Ubuntu and Canonical such a warm, rewarding, emotional, exceptional experience!

Cheers,
@DustinKirkland

Raphaël Hertzog: My Free Software Activities in March 2018

Wed, 04/04/2018 - 12:22pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I reviewed and merged 14 merge requests from multiple contributors:

On top of this, I updated the Salsa/AliothMigration wiki page with information about how to best leverage tracker.debian.org when you migrate to salsa.

I also filed a few issues for bugs or things that I’d like to see improved:

I also gave my feedback about multiple mockups prepared by Chirath R in preparation of his Google Summer of Code project proposal.

Security Tools Packaging Team

Following the departure of alioth, the new list that we requested on lists.debian.org has been created: https://lists.debian.org/debian-security-tools/

I updated (in the git repositories) all the Vcs-* and all the Maintainer fields of the packages maintained by the team.

I prepared and uploaded afflib 3.7.16-3 to fix RC bug #892599. I sponsored rhash 1.3.6 for Aleksey Kravchenko, ccrypt 1.10-5 for Alexander Kulak and ledger-wallets-udev 0.1 for Stephne Neveu.

Debian Live

This project also saw an unexpected resurgence of activity and I had to review and merge many merge requests:

It’s nice to see two derivatives being so active in upstreaming their changes.

Misc stuff

Hamster time tracker. I was regularly hit by a bug leading to a gnome-shell crash (leading to a graphical session crash, due to the design of Wayland), and this time I decided that enough was enough, so I started to dig into the code and did my best to fix the issues I encountered. During the month, I tested multiple versions and submitted three pull requests. Right now, the version in git is working fine for me. Still, it really smells of bad design that mistakes in shell extensions can have such dramatic consequences.

Packaging. I forwarded #892063 to upstream in a new ticket. I updated zim to version 0.68 (final release replacing release candidate that I had already packaged). I filed #893083 suggesting that the hello source package should be a model for other packages and as such it should have a git repository hosted on salsa.debian.org.

Sponsorship. I sponsored pylint-django 0.9.4-1 for Joseph Herlant. I also sponsored urwid 2.0.1-1 (new upstream version), xlwt 1.3.0-1 (new version with python 3 support), elastalert 0.1.29-1 (new upstream release and RC bug fix) which have been updated for Freexian customers.

Thanks

See you next month for a new summary of my activities.

