
Feed aggregator

David Tomaschik: The IoT Hacker's Toolkit

Planet Ubuntu - Mon, 16/04/2018 - 9:00pm

Today, I’m giving a talk entitled “The IoT Hacker’s Toolkit” at BSides San Francisco. I thought I’d release a companion blog post to go along with the slide deck. I’ll also include a link to the video once it gets posted online.

Introduction

From my talk synopsis:

IoT and embedded devices provide new challenges to security engineers hoping to understand and evaluate the attack surface these devices add. From new interfaces to uncommon operating systems and software, the devices require both skills and tools just a little outside the normal security assessment. I’ll show both the hardware and software tools, where they overlap and what capabilities each tool brings to the table. I’ll also talk about building the skillset and getting the hands-on experience with the tools necessary to perform embedded security assessments.

While some IoT devices can be evaluated from a purely software standpoint (perhaps reverse engineering the mobile application is sufficient for your needs), a lot more can be learned about the device by interacting with all the interfaces available (often including ones not intended for access, such as debug and internal interfaces).

Background

I’ve always had a fascination with both hacking and electronics. I became a radio amateur at age 11, and in college, since my school had no concentration in computer security, I selected an embedded systems concentration. As a hacker, I’ve viewed the growing population of IoT devices with fascination. These devices introduce a variety of new challenges to hackers, including the security engineers tasked with evaluating these devices for security flaws:

  • Unfamiliar architectures (mostly ARM and MIPS)
  • Unusual interfaces (802.15.4, Bluetooth LE, etc.)
  • Minimal software (stripped C programs are common)

Of course, these challenges also present opportunities for hackers (white-hat and black-hat alike) who understand the systems. While finding a memory corruption vulnerability in an enterprise web application is all but unheard of, on an IoT device, it’s not uncommon for web requests to be parsed and served using basic C, with all the memory management issues that entails. In 2016, I found memory corruption vulnerabilities in a popular IP phone.

Think Capabilities, Not Toys

A lot of hackers, myself included, are “gadget guys” (or “gadget girls”). It’s hard not to look at every possible tool as something new to add to the toolbox, but at the end of the day, one has to consider how the tool adds new capabilities. It needn’t be a completely distinct capability; perhaps it offers improved speed or stability.

Of course, this is a “do as I say, not as I do” area. I, in fact, have quite a number of devices with overlapping capabilities. I’d love to claim this was just to compare devices for the benefit of those attending my presentation or reading this post, but honestly, I do love my technical toys.

Software

Much of the software does not differ from that for application security or penetration testing. For example, Wireshark is commonly used for network analysis (IP and Bluetooth), and Burp Suite for HTTP/HTTPS.

The website fccid.io is very useful for reconnaissance of devices, providing information about the frequencies and modulations used, and often internal pictures of the device, which can reveal information such as chipsets, overall architecture, etc., all without lifting a screwdriver.

Reverse Engineering

Firmware images are often multiple files concatenated, or contain proprietary metadata headers. Binwalk walks the image, looking for known file signatures, and extracts the components. Often this will include entire Linux filesystems, kernel images, etc.
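
To make the idea concrete, here is a minimal Python sketch of the kind of signature scan that binwalk automates: it walks a firmware blob looking for a handful of well-known magic bytes. The filename and the tiny signature table are illustrative assumptions; binwalk itself knows hundreds of signatures and also handles carving the components out.

# signature_scan.py - toy version of the signature scan binwalk performs.
# "firmware.bin" and the signature list below are illustrative assumptions.
SIGNATURES = {
    b"\x1f\x8b\x08": "gzip compressed data",
    b"hsqs": "SquashFS filesystem (little-endian)",
    b"\x27\x05\x19\x56": "uImage kernel header",
    b"ustar": "tar archive entry",
}

def scan(path):
    data = open(path, "rb").read()
    for magic, desc in SIGNATURES.items():
        offset = data.find(magic)
        while offset != -1:
            print(f"0x{offset:08x}  {desc}")
            offset = data.find(magic, offset + 1)

if __name__ == "__main__":
    scan("firmware.bin")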

Once you have extracted this, you might be interested in analyzing the binaries or other software contained inside. Often a disassembler is useful. My current favorite disassembler is Binary Ninja, but there are a number of options.

Basic Tools

There are a few tools that I consider absolutely essential to any sort of hardware hacking exercise. These tools are fundamental to gaining an understanding of the device and accessing multiple types of interfaces on the device.

Screwdriver Set

A screwdriver set might be an obvious thing, but you’ll want one with bits that can get into tight places and are appropriately sized to the screws on your device (using the wrong size Phillips bit is one of the easiest ways to strip a screw). Many devices also use “security screws”, which seems to be a term applied to just about any screw that doesn’t come in your standard household tool kit. (I’ve seen Torx, triangle bits, square bits, Torx with a center pin, etc.)

I have a wonderful driver kit from iFixit, and I’ve found almost nothing that it won’t open. The extension driver helps get into smaller spaces, and the 64 bits cover just about everything. I personally like to support iFixit because they have great write-ups and teardowns, but there are also cheaper clones of this toolkit.

Openers

Many devices are sealed with plastic catches or pieces that are press-fit together. For these, you’ll need some kind of opener (sometimes called a “spudger”) to pry them apart. I find a variety of shapes useful. You can get these as part of a combined tool kit from iFixit, as iFixit clones, or as openers by themselves. I have found the iFixit model to be of slightly higher quality, but I also carry a cheap clone for occasional travel use.

The very thin metal one with a plastic handle is probably my favorite opener – it fits into the thinnest openings, but consequently it also bends fairly easily. I’ve been through a few due to bending damage. Be careful how you use these tools, and make sure your hand is not where they will go if they slip! They are not quite razor-blade sharp, but they will cut your hand with a bit of force behind them.

Multimeter

I get it, you’re looking to hack the device, not rewire your car. That being said, for a lot of tasks, a halfway decent multimeter is somewhere between an absolute requirement and a massive time saver. Some of the tasks a multimeter will help with include:

  • Identifying unknown pinouts
  • Finding the ground pin for a UART
  • Checking which components are connected
  • Figuring out what kind of power supply you need
  • Checking the voltage on an interface to make sure you don’t blow something up

I have several multimeters (more than one is important for electronics work), but you can get by with a single one for your IoT hacking projects. The UNI-T UT-61E is a popular model at a good price/performance ratio, but its safety ratings are a little optimistic. The EEVBlog BM235 is my favorite of my meters, but a little higher end (aka expensive). If you’re buying for work, the Fluke 87V is the “gold standard” of multimeters.

If you buy a cheap meter, it will probably work for IoT projects, but there are many unsafe multimeters out there. Please do not use these cheap meters on “mains” electricity, high voltage power supplies, anything coming out of the wall, etc. Your personal safety is not worth saving $40.

Soldering Iron

You will find a lot of unpopulated headers (just the holes in the circuit board) on production IoT devices. The headers for various debug interfaces are left out, either as a cost savings, or for space reasons, or perhaps both. The headers were used during the development process, but often the manufacturer wants to leave the connections either to avoid redoing the printed circuit board (PCB) layout, or to be able to debug failures in the field.

In order to connect to these unpopulated headers, you will want to solder your own headers in their place. To do so, you’ll need a soldering iron. To minimize the risk of damaging the board in the process, use a soldering iron with a variable temperature and a small tip. The Hakko FX-888D is very popular and a very nice option, but you can still do good work with something cheaper like an Aoyue or other options. Just don’t use a soldering iron designed for a plumber or similar uses – you’ll just end up burning the board.

Likewise, you’ll want to practice your soldering skills before you start work on your target board – find some small soldering projects to practice on, or some throwaway scrap electronics to work on.

Network Interfaces

Obviously, these devices have network interfaces. After all, they are the “Internet of Things”, so a network connection would seem to be a requirement. Nearly universally, 802.11 connectivity is present (sometimes only on a base station), and Ethernet (10/100 or Gigabit) interfaces are also very common.

Wired Network Sniffing

The easiest way to sniff a wired network is often a second interface on your computer. I’m a huge fan of this USB 3.0 to Dual Gigabit Adapter, which even has a USB-C version for those using one of the newer laptops or MacBooks that only support USB-C. Either option gives you two network ports to work with, even on laptops without built-in wired interfaces.

Beyond this, you’ll need software for the sniffing. Wireshark is an obvious tool for raw packet capture, but you’ll often also want HTTP/HTTPS sniffing, for which Burp Suite is the de facto standard, but mitmproxy is an up-and-coming contender with a lot of nice features.
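
As a small example of the kind of HTTP inspection this enables, here is a hedged sketch of a mitmproxy addon that logs every request an IoT device makes through the proxy. The script name and the choice of logged fields are assumptions for illustration; run it with something like "mitmproxy -s iot_logger.py" and point the device (or its mobile app) at the proxy.

# iot_logger.py - minimal mitmproxy addon sketch; logged fields are illustrative.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Print the method and URL of every request the device makes.
    print(f"{flow.request.method} {flow.request.pretty_url}")

def response(flow: http.HTTPFlow) -> None:
    # Note the status code and content type coming back from the cloud service.
    print(f"  -> {flow.response.status_code} "
          f"{flow.response.headers.get('content-type', '')}")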

Wireless Network Sniffing

Most common wireless network interfaces on laptops support monitor mode, but perhaps you’d like to keep your built-in wireless connected to the internet while sniffing on another interface. Alfa wireless cards like the AWUS036NH and the AWUS036ACH have been quite popular for a while, but I personally like using the tiny RT5370-based adapters for assessments not requiring long range, due to their compact size and portability.

Wired (Debug/Internal) Interfaces

There are many subtle interfaces on IoT devices, intended for either debug use, or for various components to communicate with each other. For example:

  • SPI/I2C for flash chips
  • SPI/SD for wifi chips
  • UART for serial consoles
  • UART for bluetooth/wifi controllers
  • JTAG/SWD for debugging processors
  • ICSP for In-Circuit Programming

UART

Though there are many universal devices that can do other things, I run into UARTs so often that I like having a standalone adapter for this. Additionally, having a standalone adapter allows me to maintain a UART connection at the same time as I’m working with JTAG/SWD or other interfaces.

You can get a standalone cable for around $10 that can be used for most UART interfaces. (On most devices I’ve seen, the UART interface is 3.3V, and these cables work well for that.) Most of these cables have the following pinout (a minimal pyserial sketch for talking to the console follows the list), but make sure you check your own:

  • Red: +5V (Don’t connect on most boards)
  • Black: GND
  • Green: TX from Computer, RX from Device
  • White: RX from Computer, TX from Device
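
Whichever cable or breakout you use, once the wiring is right the adapter shows up as an ordinary serial port on the host. Here is a minimal pyserial sketch for poking at a console; the device path, baud rate and 115200 8N1 settings are assumptions, so check what your adapter enumerates as and what the target actually uses.

# uart_console.py - grab whatever a target's UART console prints.
# /dev/ttyUSB0 and 115200 baud are assumptions; adjust for your adapter and target.
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
ser.write(b"\r\n")                # nudge the console to print a prompt
banner = ser.read(4096)           # read whatever the bootloader or shell emits
print(banner.decode("utf-8", errors="replace"))
ser.close()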

There are also a number of breakouts for the FT232RL or the CH340 chips for UART to USB. These provide a row of headers to connect jumpers between your target device and the adapter. I prefer the simplicity of the cables (and fewer jumper ends to come loose during my testing), but this is further evidence that there are a number of options to provide the same capabilities.

Universal Interfaces (JTAG/SWD/I2C/SPI)

There are a number of interface boards referred to as “universal interfaces” that have the capability to interface with a wide variety of protocols. These largely fit into two categories:

  • Bit-banging microcontrollers
  • Hardware interfaces (dominated by the FT*232 series from FTDI)

There are a number of options for implementing a bit-banging solution for speaking these protocols, ranging from software projects to run on an Arduino, to projects like the Bus Pirate, which uses a PIC microcontroller. These generally present a serial interface (UART) to the host computer and applications, and use in-band signalling for configuration and settings. There may be some timing issues on certain devices, as microcontrollers often cannot update multiple output pins in the same clock cycle.

Hardware interfaces expose a dedicated USB endpoint to talk to the device; they can still be configured, but that is done via USB endpoints and registers rather than in-band signalling. The protocols are implemented in semi-dedicated hardware. In my experience, these devices are both faster and more reliable than bit-banging microcontrollers, but you are limited to whatever protocols are supported by the particular device, or the capabilities of the software to drive them. (For example, the FT*232H series can do most protocols via bit-banging, but it updates an entire register at a time, and has high enough speed to run at the clock rate of many protocols.)

The FT2232H and FT232H (not to be confused with the FT232RL, which is UART only), in particular, have been incorporated into a number of different breakout boards that make excellent universal interfaces.
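
As an example of driving one of these boards from the host, here is a hedged sketch using the pyftdi library to read the JEDEC ID of a SPI flash chip through an FT232H; the ftdi:// URL, chip-select line and clock rate are assumptions that depend on your particular board and wiring.

# spi_jedec_id.py - read a SPI flash JEDEC ID via an FT232H breakout (pyftdi).
# The ftdi:// URL, CS line and 1 MHz clock are assumptions for illustration.
from pyftdi.spi import SpiController

spi = SpiController()
spi.configure("ftdi://ftdi:232h/1")           # first FT232H found on USB
flash = spi.get_port(cs=0, freq=1e6, mode=0)  # CS0, 1 MHz, SPI mode 0
jedec = flash.exchange(b"\x9f", 3)            # 0x9F is the JEDEC "Read ID" command
print("Manufacturer/device ID:", jedec.hex())
spi.terminate()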

Logic Analyzer

When you have an unknown protocol, unknown pinout, or unknown protocol settings (baud rate, polarity, parity, etc.), a logic analyzer can help dramatically by giving you a direct look at the signals being passed between chips or interfaces.

I have a Saleae Logic 8, which is a great-value logic analyzer. It is compact, and the Saleae software is excellent and easy to use. I’ve used it to discover the pinout for many unlabeled ports, discover the settings for UARTs, and just generally snoop on traffic between two chips on a board.

Though there are cheap knock-offs available on eBay or AliExpress, I have tried them and found their quality very poor, and unfortunately the open-source sigrok software is not quite up to the quality of the Saleae software. Additionally, the knock-offs rarely have any input protection to prevent you from blowing up the analyzer itself.

Wireless

Obviously, the Internet of Things has quite a number of wireless devices. Some of these devices use WiFi (discussed above), but many use other wireless protocols. Bluetooth (particularly Bluetooth LE) is quite common, but in other areas, such as home automation, other protocols prevail. Many of these are based on 802.15.4 (e.g., Zigbee), Z-Wave, or proprietary protocols in the 433 MHz, 915 MHz, or 2.4 GHz ISM bands.

Bluetooth

Bluetooth devices are incredibly common, and Bluetooth Low Energy (starting with Bluetooth 4.0) is very popular for IoT devices. Most devices that do not stream audio, provide IP connectivity, or have other high-bandwidth needs seem to be moving to Bluetooth Low Energy, probably for several reasons:

  1. Lower power consumption (battery friendly)
  2. Cheaper chipsets
  3. Less complex implementation

There is essentially only one tool I can really recommend for assessing Bluetooth, and that is the Ubertooth One (Amazon). This can follow and capture Bluetooth communications, providing output in pcap or pcap-ng format, allowing you to import the communications into Wireshark for later analysis. (You can also use other pcap-based tools like scapy for analysis of the resulting pcaps.) The Ubertooth tools are available as packages in Debian, Ubuntu, or Kali, but you can get a more up-to-date version of the software from their GitHub repository.
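
For a quick first pass over a capture before loading it into Wireshark, a few lines of scapy go a long way. A minimal sketch follows; the filename is an assumption, and depending on the capture format you may need to convert pcap-ng to pcap first.

# pcap_triage.py - quick summary of a capture produced by the Ubertooth tools.
# "capture.pcap" is an assumption; convert pcap-ng to pcap first if needed.
from scapy.all import rdpcap

packets = rdpcap("capture.pcap")
print(f"{len(packets)} packets in capture")
for pkt in packets[:20]:
    # One-line summary per packet: layers, addresses, lengths.
    print(pkt.summary())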

Adafruit also offers a BLE Sniffer, which works only for Bluetooth Low Energy and uses a Nordic Semiconductor BLE chip with special sniffing firmware. The software for this works well on Windows, but not so well on Linux, where it is a Python script that tends to be more difficult to use than the Ubertooth tools.

Software Defined Radio

For custom protocols, or to enable lower-level evaluation or attacks of radio-based systems, Software Defined Radio presents an excellent opportunity for direct interaction with the RF side of the IoT device. This can range from only receiving (for purposes of understanding and reverse engineering the device) to being able to simultaneously receive and transmit (full-duplex) depending upon the needs of your assessment.

For simply receiving, there are simple DVB-T dongles that have been repurposed as general-purpose SDRs, often referred to as “RTL SDRs”, a name based on the Realtek RTL2832U chips present in the device. These can be used because the chip is capable of providing the raw samples to the host operating system, and because of their low cost, a large open source community has emerged. Companies like NooElec are now even offering custom built hardware based on these chips for the SDR community. There’s also a kit that expands the receive range of the RTL-SDR dongles.
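
To give a feel for how simple the receive side is, here is a hedged sketch using the pyrtlsdr bindings to grab raw IQ samples around the 433.92 MHz ISM band; the frequency, sample rate and gain are assumptions to adapt to whatever your target transmits on.

# rtlsdr_capture.py - grab raw IQ samples from an RTL-SDR dongle (pyrtlsdr).
# Frequency, sample rate and gain are illustrative assumptions.
import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6        # samples per second
sdr.center_freq = 433.92e6       # a common ISM-band frequency for cheap remotes
sdr.gain = "auto"

samples = sdr.read_samples(256 * 1024)   # complex IQ samples as a numpy array
sdr.close()

# Crude activity check: average power of the capture in dB.
power_db = 10 * np.log10(np.mean(np.abs(samples) ** 2))
print(f"Captured {len(samples)} samples, average power {power_db:.1f} dB")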

In order to transmit as well, the hardware is significantly more complex, and most options in this space are driven by an FPGA or other powerful processor. Even a few years ago, the capabilities here were very expensive with tools like the USRP. However, the HackRF by Great Scott Gadgets and the BladeRF by Nuand have offered a great deal of capability for a hacker-friendly price.

I personally have a BladeRF, but I honestly wish I had bought a HackRF instead. The HackRF has a wider usable frequency range (especially at the low end), while the BladeRF requires a relatively expensive upconverter to cover those bands. The HackRF also seems to have a much more active community and better support in some areas of open source software.

Other Useful Tools

It is occasionally useful to use an oscilloscope to look at RF signals or check signal integrity, but I have almost never found this necessary.

Specialized JTAG programmers for specific hardware often work better, but they cost quite a bit more and only support those specific targets.

For dumping Flash chips, Xeltek programmers/dumpers are considered the “top of the line” and do an incredible job, but are at a price point such that only labs doing this on a regular basis find it worthwhile.

Slides

PDF: The IoT Hacker’s Toolkit

Lubuntu Blog: This Week in Lubuntu Development #3

Planet Ubuntu - Mon, 16/04/2018 - 6:45pm
Here is the third issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. Here's what she has been working on: Start page for Evince. Start docs for the Document Viewer. Start work on the GNOME […]

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Planet Ubuntu - Mon, 16/04/2018 - 4:07pm

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


Elizabeth K. Joseph: SCaLE16x with Ubuntu, CI/CD and more!

Planet Ubuntu - Fri, 13/04/2018 - 10:49pm

Last month I made my way down to Pasadena for one of my favorite conferences of the year, the Southern California Linux Expo. Like most years, I split my time between Ubuntu and stuff I was working on for my day job. This year that meant doing two talks and attending UbuCon on Thursday and half of Friday.

As with past years, UbuCon at SCALE was hosted by Nathan Haines and Richard Gaskin. The schedule this year was very reflective about the history and changes in the project. In a talk from Sriram Ramkrishna of System76 titled “Unity Dumped Us! The Emotional Healing” he talked about the closing of development on the Unity desktop environment. System76 is primarily a desktop company, so the abrupt change of direction from Canonical took some adjusting to and was a little painful. But out of it came their Ubuntu derivative Pop!_OS and a community around it that they’re quite proud of. In the talk “The Changing Face of Ubuntu” Nathan Haines walked through Ubuntu history to demonstrate the changes that have happened within the project over the years, and allow us to look at the changes today with some historical perspective. The Ubuntu project has always been about change. Jono Bacon was in the final talk slot of the event to give a community management talk titled “Ubuntu: Lessons Learned”. Another retrospective, he drew from his experience when he was the Ubuntu Community Manager to share some insight into what worked and what didn’t in the community. Particularly noteworthy for me were his points about community members needing direction more than options (something I’ve also seen in my work, discrete tasks have a higher chance of being taken than broad contribution requests) and the importance of setting expectations for community members. Indeed, I’ve seen that expectations are frequently poorly communicated in communities where there is a company controlling direction of the project. A lot of frustration could be alleviated by being more clear about what is expected from the company and where the community plays a role.


UbuCon group photo courtesy of Nathan Haines (source)

The UbuCon this year wasn’t as big as those in years past, but we did pack the room with nearly 120 people for a few talks, including the one I did on “Keeping Your Ubuntu Systems Secure”. Nathan Haines suggested this topic when I was struggling to come up with a talk idea for the conference. At first I wasn’t sure what I’d say, but as I started taking notes about what I know about Ubuntu, both from a systems administration perspective with servers and as someone who has done a fair amount of user support in the community over the past decade, it turned out that I did have an entire talk’s worth of advice! None of what I shared was complicated or revolutionary; there was no kernel hardening in my talk or much use of third-party security tools. Instead the talk focused on things like keeping your system updated, developing a fundamental understanding of how your system and Debian packages work, and tips around software management. The slides for my presentation are pretty wordy, so you can glean the tips I shared from them: Keeping_Your_Ubuntu_Systems_Secure-UbuConSummit_Scale16x.pdf.


Thanks to Nathan Haines for taking this photo during my talk (source)

The team running Ubuntu efforts at the conference rounded off SCALE by staffing a booth through the weekend. The Ubuntu booths have certainly evolved over the years; when I ran them it was always a bit cluttered and had quite the grass roots feeling to it (the booth in 2012). The booths the team put together now are simpler and more polished. This is definitely in line with the trend of more polished open source software presence in general, so kudos to the team for making sure our little Ubuntu California crew of volunteers keeps up.

Shifting over to the more work-focused parts of the conference, on Friday I spoke at Container Day, with my talk being the first of the day. The great thing about going first is that I get to complete my talk and relax for the rest of the conference. The less great thing about it is that I get to experience all the A/V gotchas and be awake and ready to give a talk at 9:30AM. Still, I think the pros outweighed the cons and I was able to give a refresh of my “Advanced Continuous Delivery Strategies for Containerized Applications Using DC/OS” talk, which included a new demo that I finished writing the week before. The talk seemed to generate interest that led to good discussions later in the conference, and to my relief the live demo concluded without a problem. Slides from the talk can be found here: Advanced_CD_Using_DCOS-SCALE16x.pdf


Thanks to Nathan Handler for taking this photo during my talk (source)

Saturday and Sunday brought a duo of keynotes that I wouldn’t have expected at an open source conference five years ago, from Microsoft and Amazon. In both these keynotes the speaker recognized the importance of open source today in the industry, which has fueled the shift in perspective and direction regarding open source for these companies. There’s certainly a celebration to be had around this: when companies are contributing to open source because it makes business sense to do so, we all benefit from the increased opportunities that presents. On the other hand, it has caused disruption in the older open source communities, and some have struggled to continue to find personal value and meaning in this new open source world. I’ve been thinking a lot about this since the conference and have started putting together a talk about it, nicely timed for the 20th anniversary of the “open source” term. I want to explore how veteran contributors stay passionate and engaged, and how we can bring this same feeling to new contributors who came down different paths to join open source communities.

Regular talks began on Saturday with me attending Nathan Handler’s talk on “Terraforming all the things”, where he shared some of the work they’ve been doing at Yelp that has resulted in things like DNS records and CDN configuration being handled by Terraform. From there I went to a talk by Brian Proffitt where he talked about metrics in communities and the Community Health Analytics Open Source Software (CHAOSS) project. I spent much of the rest of the day in the “hallway track” catching up with people, but at the end I popped into a talk by Steve Wong on “Running Containerized Workloads in an on-prem Datacenter” where he discussed the role that bare metal continues to have in the industry, even as many rush to the cloud for a turnkey solution.

It was at this talk where I had the pleasure of meeting one of our newest Account Executives at Mesosphere, Kelly Bond, and also had some time to catch up with my colleague Jörg Schad.


Jörg, me, Kelly

Nuritzi Sanchez presented my favorite talk on Sunday, on Endless OS. They build a Linux distribution using Flatpak and, as an organization, work on the problem of access to technology in developing nations. I’ve long been concerned about cellphone-only access in these countries. You need a mix of a system that’s tolerant of being offline and that has input devices (like keyboards!) that allow work to be done on them. They’re doing really interesting work on the technical side related to offline content and general architecture around a system that needs to be conscious of offline status, but they’re also developing deployment strategies on the ground in places like Indonesia that will ensure the local community can succeed long term. I have a lot of respect for the people working toward all this, and really want to see this organization succeed.

I’m always grateful to participate in this conference. It’s grown a lot over the years and it certainly has changed, but the autonomy given to the special events like UbuCon allows for a conference that brings together lots of different voices and perspective all in one place. I also have a lot of friends who attend this conference, many of whom span jobs and open source projects I’ve worked on over more than a decade. Building friendships and reconnecting with people is part of what makes the work I do in open source so important to me, and not just a job for me. Thanks to everyone who continues to make this possible year after year in beautiful Pasadena.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157693153653781

Simon Raffeiner: I went to Fukushima

Planet Ubuntu - Fri, 13/04/2018 - 1:36pm

I'm an engineer and interested in all kinds of technology, especially if it is used to build something big. But I'm also fascinated by what happens when things suddenly change and don't go as expected, and especially by everything that's left behind after technological and social revolutions or disasters. In October 2017 I travelled across Japan and decided to visit one of the places where technology had failed in the worst way imaginable: the Fukushima Evacuation Zone.

The post I went to Fukushima appeared first on LIEBERBIBER.

Kees Cook: security things in Linux v4.16

Planet Ubuntu - Fri, 13/04/2018 - 2:04am

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen, one area of work remaining are the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:     48074720
Usercopyable Memory:    6367532   13.2%
task_struct               0.2%      4480/1630720
RAW                       0.3%       300/96000
RAWv6                     2.1%      1408/64768
ext4_inode_cache          3.0%    269760/8740224
dentry                   11.1%    585984/5273856
mm_struct                29.1%     54912/188448
kmalloc-8               100.0%     24576/24576
kmalloc-16              100.0%     28672/28672
kmalloc-32              100.0%     81920/81920
kmalloc-192             100.0%     96768/96768
kmalloc-128             100.0%    143360/143360
names_cache             100.0%    163840/163840
kmalloc-64              100.0%    167936/167936
kmalloc-256             100.0%    339968/339968
kmalloc-512             100.0%    350720/350720
kmalloc-96              100.0%    455616/455616
kmalloc-8192            100.0%    655360/655360
kmalloc-1024            100.0%    812032/812032
kmalloc-4096            100.0%    819200/819200
kmalloc-2048            100.0%   1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is default on, and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions which lets the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Ubuntu Podcast from the UK LoCo: S11E06 – Six Feet Over It - Ubuntu Podcast

Planet Ubuntu - Thu, 12/04/2018 - 4:15pm

This week we review the Dell XPS 13 (9370) Developer Edition laptop, bring you some command line lurve and go over all your feedback.

It’s Season 11 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Lessons from OpenStack Telemetry: Incubation

Planet Debian - Thu, 12/04/2018 - 2:50pm

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resources usage was at the heart of the strategy. The fact that no components in OpenStack were exposing any consumption API was a problem.

We debated how to implement those metering features in the cloud platform. There were two natural solutions: either adding some resource accounting and reporting to each OpenStack project, or building new software on the side to cover for the lack of those functionalities.

At that time there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an infinite task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast.

Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little.

Our first and obvious candidate for usage retrieval was Nova, where Ceilometer aimed to retrieve statistics about virtual machine instance utilization. Nova offered no API to retrieve those data – and still doesn't. Since it was out of the equation to wait several months to have such an API exposed, we took the shortcut of polling libvirt, Xen or VMware directly from Ceilometer.

That's precisely how temporary hacks become historical design. Implementing this design broke the basis of the abstraction layer that Nova aims to offer.

As time passed, several leads were followed to mitigate those trade-offs in better ways. But on each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking a long time to review, to a long list of requirements just to merge anything. Soon, you'd have to create a blueprint to track your work, and write a full specification linked to that blueprint, with that specification itself being reviewed by a bunch of the so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem being solved to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you'd have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousand kilometers from home to an OpenStack Summit, and orally argue with developers in a room filled with hundreds of other folks about the urgency of your feature compared to other blueprints.

An OpenStack design session in Hong-Kong, 2013

Even if you passed all of those ordeals, the code you'd send could be rejected, and you'd get back to updating your specification to shed light on some particular points that confused people. Back to square one.

Nobody wanted to play that game. Not in the Telemetry team at least.

So Ceilometer continued to grow, surfing the OpenStack hype curve. More developers were joining the project every cycle – each with their own list of ideas, features or requirements cooked up by their in-house product manager.

But many features did not belong in Ceilometer. They should have been in different projects. Ceilometer was the first OpenStack project to pass through the OpenStack Technical Committee incubation process that existed before the rules were relaxed.

This incubation process was uncertain, long, and painful. We had to justify the existence of the project, and many technical choices that have been made. Where we were expecting the committee to challenge us at fundamental decisions, such as breaking abstraction layers, it was mostly nit-picking about Web frameworks or database storage.

Consequences

The rigidity of the process discouraged anyone from starting a new project for anything related to telemetry. Therefore, everyone went ahead and started dumping their ideas into Ceilometer itself. With more than ten companies interested, the frictions were high, and the project was at some point pulled apart in all directions. This phenomenon was happening to every OpenStack project anyway.

On the one hand, many contributions brought marvelous pieces of technology to Ceilometer. We implemented several features you still don't find in any other metering system. Dynamically sharded, automatic, horizontally scalable polling? Ceilometer has had that for years, whereas you can't have it in, e.g., Prometheus.

On the other hand, there were tons of crappy features. Half-baked code merged because somebody needed to ship something. As the project grew further, some of us developers started to feel that this was getting out of control and could be disastrous. The technical debt was growing as fast as the project was.

Several technical choices that were made were definitely bad. The architecture was a mess; the messaging bus was easily overloaded, the storage engine was non-performant, etc. People would come to me (as I was the Project Team Leader at that time) and ask why the REST API would need 20 minutes to reply to an autoscaling request. The willingness to solve everything for everyone was killing Ceilometer. It's around that time that I decided to step out of my role of PTL and started working on Gnocchi to solve at least one of our biggest challenges: efficient data storage.

Ceilometer was also suffering from the poor quality of many OpenStack projects. As Ceilometer retrieves data from a dozen other projects, it has to use their interfaces for data retrieval (API calls, notifications) – or sometimes work around their lack of any interface. Users were complaining about Ceilometer malfunctioning while the root of the problem was actually on the other side, in the polled project. The polling agent would try to retrieve the list of virtual machines running on Nova, but just listing and retrieving this information required several HTTP requests to Nova. And those basic retrieval requests would overload the Nova API. The API does not offer any genuine interface from which the data could be retrieved in a small number of calls. And it had terrible performance.
From the point of view of the users, the load was generated by Ceilometer. Therefore, Ceilometer was the problem. We had to imagine new ways of circumventing tons of limitations from our siblings. That was exhausting.

At its peak, during the Juno and Kilo releases (early 2015), the code size of Ceilometer reached 54k lines of code, and the number of committers reached 100 individuals (20 regulars). We had close to zero happy users, operators were hating us, and everybody was wondering what the hell was going on in those developers' minds.

Nonetheless, despite the impediments, most of us had a great time working on Ceilometer. Nothing's ever perfect. I've learned tons of things during that period, which were actually mostly non-technical. Community management, social interactions, human behavior and politics were at the heart of the adventure, offering a great opportunity for self-improvement.

In the next blog post, I will cover what happened in the years that followed that booming period, up until today. Stay tuned!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Bursary applications for DebConf18 are closing in 48 hours!

Planet Debian - Thu, 12/04/2018 - 12:30pm

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Ante Karamatić: Spaces – uncomplicating your network

Planet Ubuntu - Thu, 12/04/2018 - 6:44am
An old OpenStack network architecture

For the past 5-6 years I’ve been in the business of deploying cloud solutions for our customers. The vast majority of that was some form of OpenStack, either a simple cloud or a complicated one. But when you think about it – what is a simple cloud? It’s easy to say that a small number of machines makes an easy cloud, and a large number of machines makes a complicated one. But that is not true. The complexity of a typical IaaS solution is pretty much determined by network complexity. Network, in all shapes and forms, from the underlay network to the customer’s overlay network requirements. I’ll try to explain how we deal with the underlay part in this blog.

It’s not a secret that a traditional tree-like network architecture just doesn’t work for cloud environments. There are multiple reasons why: it doesn’t scale very well, it requires big OSI layer 2 domains and… well, it’s based on OSI layer 2. Debugging issues on that level is never a joyful experience. Therefore, for IaaS environments one really wants a modern design in the form of a spine-leaf architecture. A layer 3 spine-leaf architecture. This allows us to have a bunch of smaller layer 2 domains, which then nicely correlate to availability zones, power zones, etc. However, managing environments with multiple layer 2 and therefore even more layer 3 domains requires a bit of rethinking. If we truly want to be effective in deploying and operating a cloud across multiple different layer 2 domains, we need to think of the network in a bit more abstract mode. Luckily, this is nothing new.

In the traditional approach to networking, we talk about TORs, management fabric, BMC/OOB fabric, etc. These are, most of the time, layer 2 concepts. A fabric, after all, is a collection of switches. But the approach is correct; we should always talk about networks in abstract terms. Instead of talking about subnets and VLANs, we should talk about the purpose of the network. This becomes important when we talk about spine-leaf architecture and multiple different subnets that serve the same purpose. In rack 1, subnet 172.16.1.0/24 is the management network, but in rack 2, the management network is on subnet 192.168.1.0/24, and so on. It’s obvious that it’s much nicer to abstract those subnets into a ‘management network’. Still, nothing new. We do this every day.

So… Why do our tools and applications still require us to use VLANs, subnets and IPs? If we deploy the same application across different racks, why do we have to keep separate configurations for each of the units of the same application? What we really want is to have all of our Keystones listening on the OpenStack Public API network, and not on subnets 192.168.10.0/24, 192.168.20.0/24 and 192.168.30.0/24. We end up thinking about an application on a network, but we configure exact copies of the same application (units) differently on different subnets. Clearly our configuration tools are not doing what we want, but rather forcing us to transform our way of thinking into what those tools need. It’s a paradox: OpenStack is not that complicated, rather it’s made complicated by the tools used to deploy it.

While trying to solve this problem in our deployments at Canonical, we came up with concept of spaces. A space would be this abstracted network that we have in our heads, but somehow fail to put into our tools. Again, spaces are not a revolutionary concept in networking, they have been in our heads all this time. So, how do we implement spaces at Canonical?

We have grown the concept of spaces across all of our tooling: MAAS, Juju and charms. When we configure MAAS to manage our bare metal machines, we do not define networks as subnets or VLANs; we rather define networks as spaces. A space has a purpose, a description and a few other attributes. VLANs, and indirectly subnets too, become properties of the space, instead of the other way around. This also means that when we deploy a machine, we deploy it connected to a space. When we deploy a machine, we usually do not deploy it on a specific network, but rather with specific requirements: must be able to talk to X, must have Y CPU and Z RAM. If you ever asked yourself why it takes so much time to rack and stack a server, it’s because of this disconnect between what we want and how we handle the configuration.

We’ve also enabled Juju to make this kind of request – it asks MAAS for machines connected to a space, or a set of spaces. It then exposes these spaces to charms, so that each charm knows what kind of networks the application has at its disposal. This allows us to do ‘juju deploy keystone --bind public=public-space -n3’: deploy three keystones and connect them to a public-space, a space defined in MAAS. Which VLAN that will be, which subnet or IP, we do not care; the charm will get information from Juju about these “low level” terms (VLANs, IPs). We humans do not think in terms of VLANs and subnets and IPs; at best we think in OSI layer 1 terms.

Sounds a bit complicated? Let’s flip it the other way around. What I can do now is define my application as “3 units of keystone, which use internal network for SQL, public network for exposing API, internal network for OpenStack’s internal communication and is also exposed on OAM network for management purposes” and this is precisely how we deploy OpenStack. In fact, the Juju bundle looks like this:

keystone:
  charm: cs:keystone
  num_units: 3
  bindings:
    "": oam-space
    public: public-space
    internal: internal-space
    shared-db: internal-space

Those who follow OpenStack development will notice that something similar has landed in OpenStack recently: routed provider networks. It’s the same concept, solving the same problem. It’s nice to see how Juju uses this out of the box.

Big thanks to MAAS, Juju, charms and OpenStack communities for doing this. It allowed us to deploy complex applications with a breeze, and therefore shifted our focus to bigger picture, IaaS modeling and some other, new challenges!

Streaming the Norwegian ultimate championships

Planet Debian - Thu, 12/04/2018 - 1:36am

As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk; two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are so many tweaks:

  • We've swapped out some of the cameras for more suitable ones; the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes. And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes out directly of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

For extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), will progress until late afternoon, and then restart on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to www.plastkast.no if you want to watch :-) Exact schedule on the page.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian LTS work, March 2018

Planet Debian - Wed, 11/04/2018 - 10:41pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

Debian SecureBoot Sprint 2018

Planet Debian - Wed, 11/04/2018 - 5:01pm

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot enabled machines do not need to turn that off to be able to run Debian. That needs us to handle signing a certain set of packages in a defined way, handling it as automated as possible while ensuring that stuff is done in a safe/secure way.

Now add details like secure handling of keys, only signing pre-approved sets (to make abusing it harder), revocations, key rollovers; combine it all with the infrastructure and situation we have in Debian (say dak, buildd, a security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures) and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually.

I’m not going into actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for actual implementation of the agreed solution. The actual dak changes turned out to not be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeam’s move to the new Salsa service forward. I still have a few of our less-important repositories to move, but that’s a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa.

That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. In the past, we pushed to the hosts directly, so any code change appearing on them meant that someone in the right unix group on that machine had made it appear.1 “Verified by ssh login”, basically.

With Salsa, we now add a service that has a different set of administrators added on top. And a big piece of software too, with a huge possibility of bugs, worst case allowing random users access to our repositories. Which is a way larger problem area than “git push via ssh” as in the past, and as such more likely to be bad. If we blindly pull from a repository on such shared space, the confirmation “a FTPMaster said this code is good” is gone.

So it needs a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what’s better than using an already established way of trusting something: GnuPG-created signatures?!

So how to go forward? I have been lucky; I did not need to invent this entirely on my own. Enrico had similar concerns for the New Maintainer web pages. He set up CI to test his stuff and, if successful, installs the tested stuff on the NM machine, provided that the commit is signed by a key from a defined set.

Unfortunately, for me, he deals with a Django app that listens somewhere and can be pushed to. No such thing for me, neither do I have Django nor do I have a service listening that I can tell about changes to fetch.

We also have to take care when a database schema upgrade needs to be done, no automatic deployment on database-using FTPMaster hosts for that, a human needs to trigger this.

So the actual implementation that I developed for us, and which is in use on all hosts that we maintain code on, is implemented in our standard framework for regular jobs, cronscript.2

It turns out to live in multiple files (as usual with cronscript), where the actual code is in deploy.functions, deploy.variables, and the order to call things is defined in deploy.tasks.

cronscript around it takes care to setup the environment and keep logs, and we now call the deploy every few minutes, securely getting our code deployed.
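
As a rough illustration of the idea (and emphatically not the actual FTPMaster tooling), a deployment step along these lines could verify the GnuPG signature on the fetched commit before fast-forwarding to it; the repository path and the assumption of a pre-configured trusted keyring are illustrative.

# deploy_check.py - sketch: only deploy a commit whose signature verifies.
# The repository path and the trusted-keyring setup are assumptions.
import subprocess
import sys

REPO = "/srv/example/code"   # hypothetical checkout location

def signed_ok(commit: str = "FETCH_HEAD") -> bool:
    # git verify-commit exits non-zero unless the commit carries a valid
    # signature from a key in the configured GnuPG keyring.
    result = subprocess.run(
        ["git", "-C", REPO, "verify-commit", commit],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    subprocess.run(["git", "-C", REPO, "fetch", "origin"], check=True)
    if not signed_ok():
        sys.exit("Refusing to deploy: commit signature did not verify")
    subprocess.run(["git", "-C", REPO, "merge", "--ff-only", "FETCH_HEAD"], check=True)
    print("Deployed verified commit")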

  1. Or someone abused root rights, but if you do not trust root, you lost anyways, and there is no reason to think that any DSA-member would do this. 

  2. A framework for FTPMaster scripts that ensures the same basic setup everywhere and makes it easy to call functions and stuff, with or without error checking, in background or foreground. Also easy to restart in the middle of a script run after breakage, as it keeps track of where it was. 

Joerg Jaspert https://blog.ganneff.de/ Ganneff's Little Blog

Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

Planet Debian - Wed, 11/04/2018 - 1:14pm

I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, it seems the culprit is the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM), which was configured with Wake-on-LAN (wol) set to the “magic packet” mode (ethtool enp0s31f6 | grep Wake-on would return ‘g’). One hint is that grep enabled /proc/acpi/wakeup returns GLAN.

There are many ways to change that for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But I had a hard time figuring out, among the many hits in so many tutorials and forum posts, whether there was a preferred way to make this persistent.

My best hit so far is to add a file named /etc/systemd/network/50-eth0.link containing:

[Match]
Driver=e1000e

[Link]
WakeOnLan=off

The driver can be found by checking udev settings as reported by udevadm info -a /sys/class/net/enp0s31f6
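As a quick sanity check, both pieces of information can also be confirmed from a shell. This is just a small sketch using the interface and driver names from my machine; adjust them to yours.

# Which kernel driver handles the interface? (prints e1000e here)
basename "$(readlink /sys/class/net/enp0s31f6/device/driver)"

# Current Wake-on-LAN setting: 'g' means magic packet, 'd' means disabled.
# After a reboot with the .link file in place, this should show 'd'.
ethtool enp0s31f6 | grep Wake-on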

There are other ways to do that with systemd, but so far this one seems to be working for me. HTH.

Olivier Berger https://www-public.tem-tsp.eu/~berger_o/weblog debian-en – WebLog Pro Olivier Berger

Bread and data

Planet Debian - Mër, 11/04/2018 - 11:01pd

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been making it every day or two ever since.

This is the first time I've made bread in 20 years or so - I recall that in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering that the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "<html>.... files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected, getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.

For example, in the project I discussed in my previous post there is an HTTP server which serves a user-interface based upon Bootstrap. I want the HTML files which make up that user-interface to be embedded in the binary, rather than distributed separately.

Anyway it's not unique, it was a fun experience to write, and I've switched to using it now.

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Launchpad News: Launchpad security advisory: cross-site-scripting in site search

Planet Ubuntu - Mër, 11/04/2018 - 10:40pd
Summary

Mohamed Alaa reported that Launchpad’s Bing site search implementation had a cross-site-scripting vulnerability.  This was introduced on 2018-03-29, and fixed on 2018-04-10.  We have not found any evidence of this bug being actively exploited by attackers; the rest of this post is an explanation of the problem for the sake of transparency.

Details

Some time ago, Google announced that they would be discontinuing their Google Site Search product on 2018-04-01.  Since this served as part of the backend for Launchpad’s site search feature (“Search Launchpad” on the front page), we began to look around for a replacement.  We eventually settled on Bing Custom Search, implemented appropriate support in Launchpad, and switched over to it on 2018-03-29.

Unfortunately, we missed one detail.  Google Site Search’s XML API returns excerpts of search results as pre-escaped HTML, using <b> tags to indicate where search terms match.  This makes complete sense given its embedding in XML; it’s hard to see how that API could do otherwise.  The Launchpad integration code accordingly uses TAL code along these lines, using the structure keyword to explicitly indicate that the excerpts in question do not require HTML-escaping (like most good web frameworks, TAL’s default is to escape all variable content, so successful XSS attacks on Launchpad have historically been rare):

<div class="summary" tal:content="structure page/summary" />

However, Bing Custom Search’s JSON API returns excerpts of search results without any HTML escaping.  Again, in the context of the API in question, this makes complete sense as a default behaviour (though a textFormat=HTML switch is available to change this); but, in the absence of appropriate handling, this meant that those excerpts were passed through to the TAL code above without escaping.  As a result, if you could craft search terms that match a portion of an existing page on Launchpad that shows scripting tags (such as a bug about an XSS vulnerability in another piece of software hosted on Launchpad), and convince other people to follow a suitable search link, then you could cause that code to be executed in other users’ browsers.

The fix was, of course, to simply escape the data returned by Bing Custom Search.  Thanks to Mohamed Alaa for their disclosure.

DRM, DRM, oh how I hate DRM...

Planet Debian - Mër, 11/04/2018 - 6:43pd

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺)

I have been very, very happy after I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking about it a bit, I got myself one.

After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even appreciate there's a line (just below the smaller barcode) that reads "Lenovo". I soon found out how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, great documentation... So far, it's the "openest" computer I have had!) and installed my new card!

The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... only to find the BIOS halting with the following message:

1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813). System is halted

So... I got everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah.

Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!)

Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer now that I was at LibrePlanet, which would have solved the issue. But they informed me that Libreboot is supported only on the (quite a bit older) X200 machines, not on the X230.

gwolf http://gwolf.org Gunnar Wolf

Jorge Castro: Kubernetes Ask Me Anything on Reddit

Planet Ubuntu - Mar, 10/04/2018 - 2:00pd

A bunch of Kubernetes developers are doing an Ask Me Anything on Reddit today. If you're interested in asking any questions, hope to see you there!

My Free Software Activities in March 2018

Planet Debian - Hën, 09/04/2018 - 11:58md

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java
  • I spent most of my free time on Java packages because… OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (bugs with severity normal would be serious today as well) of which 227 are already resolved. That means one third of the Java team’s packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it represents a lot of new and interesting ideas, but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let’s say we are a handful of regular contributors (I’m being generous) and we spend most of our time stabilizing the Java ecosystem in Debian to the point that we can build all of our packages again. Repeat for every new Debian release. Unfortunately not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn’t feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can’t we? They don’t have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries, whereas we have to ensure that everything works with one specific version of a library. But that’s not all: Java 9 will not be shipped with Buster, and we even aim for OpenJDK 11! Releases of OpenJDK will be more frequent from now on (expect a new release every six months), and certain versions, like OpenJDK 11, will receive extended security support. One thing we can look forward to: apparently more commercial features of Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day there will be only one free software JDK for everything and everyone? I hope so.
  • I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, jboss-threads, jboss-logmanager, jboss-logging-tools, qdox2, wildfly-common, activemq-activeio, jackson-datatype-joda, antlr, axis, libitext5-java, libitext1-java, libitext-java, jedit, conversant-disruptor, beansbinding, cglib, undertow, entagged, jackson-databind, libslf4j-java, proguard, libhtmlparser-java, libjackson-json-java and sweethome3d (patch by Emmanuel Bourg)
  • New upstream versions: jboss-threads, okio, libokhttp-java, snakeyaml, robocode.
  • I NMUed jtb and applied a patch from Tiago Stürmer Daitx.
Debian LTS

This was my twenty-fifth month as a paid contributor and I have been paid to work 23.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 19.03.2018 until 25.03.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in imagemagick, libvirt, freeplane, exempi, calibre, gpac, ipython, binutils, libraw, memcached, mosquitto, sdl-image1.2, slurm-llnl, graphicsmagick, libslf4j-java, radare2, sam2p, net-snmp, apache2, ldap-account-manager, librelp, ruby-rack-protection, libvncserver, zsh and xerces-c.
  • DLA-1310-1. Issued a security update for exempi fixing 6 CVE.
  • DLA-1315-1. Issued a security update for libvirt fixing 2 CVE.
  • DLA-1316-1. Issued a security update for freeplane fixing 1 CVE.
  • DLA-1322-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • DLA-1325-1. Issued a security update for drupal7 fixing 1 CVE.
  • DLA-1326-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1328-1. Issued a security update for xerces-c fixing 1 CVE.
  • DLA-1335-1. Issued a security update for zsh fixing 2 CVE.
  • DLA-1340-1. Issued a security update for sam2p fixing 5 CVE. I also prepared a security update for Jessie. (#895144)
  • DLA-1341-1. Issued a security update for sdl-image1.2 fixing 6 CVE.
Misc
  • I triaged all open bugs in imlib2 and forwarded the issues upstream. The current developer of imlib2 was very responsive and helpful. Thanks to Kim Woelders, several longstanding bugs could be fixed.
  • There was also a new upstream release for xarchiver. Check it out!

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Lubuntu Blog: This Week in Lubuntu Development #2

Planet Ubuntu - Hën, 09/04/2018 - 6:00md
Here is the second issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General We released 18.04 Final Beta this week. You can find the announcement here. The encrypted LVM bug we described last week has been fixed (thanks to Steve Langasek!). We are still working hard to try […]
