
Beyond Bitcoin: How Business Can Capitalize On Blockchains

Slashdot.org - Tue, 01/09/2015 - 4:07am
snydeq writes: Bitcoin's widely trusted ledger offers intriguing possibilities for business use beyond cryptocurrency, writes InfoWorld's Peter Wayner. "From the beginning, bitcoin has assumed a shadowy, almost outlaw mystique," Wayner writes. "Even the mathematics of the technology are inscrutable enough to believe the worst. The irony is that the mathematical foundations of bitcoin create a solid record of legitimate ownership that may be more ironclad against fraud than many of the systems employed by businesses today. Plus, the open, collaborative way in which bitcoin processes transactions ensures the kind of network of trust that is essential to any business agreement."

Read more of this story at Slashdot.

Norbert Preining: PiwigoPress release 2.31

Planet Debian - Tue, 01/09/2015 - 2:36am

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates new features for the sidebar widget, and better interoperability with some Piwigo galleries.

The new features are:

  • Selection of images: Up to now, images for the widget were selected at random. The current version allows selecting images either at random (the default, as before) or in ascending or descending order by various criteria (upload date, availability time, id, name, etc.). With this change it is now possible to always display the most recent image(s) from a gallery.
  • Interoperability: Some Piwigo galleries don’t have thumbnail-sized representatives. For these galleries the widget was broken and didn’t display any image. We now check for either square or thumbnail representatives.

That’s all, enjoy, and leave your wishlist items and complaints at the issue tracker of the GitHub project piwigopress.

Nearly Every Seabird May Be Eating Plastic By 2050

Slashdot.org - Tue, 01/09/2015 - 2:18am
sciencehabit writes: According to a new study, almost every ocean-foraging species of bird may be eating plastic by 2050. In the five large ocean areas known as "garbage patches," each square kilometer of surface water holds almost 600,000 pieces of debris. Sciencemag reports: "By 2050, about 99.8% of the species studied will have eaten plastic, the researchers report online today in the Proceedings of the National Academy of Sciences. Consuming plastic can cause myriad problems, Wilcox says. For example, some types of plastics absorb and concentrate environmental pollutants, he notes. After ingestion, those chemicals can be released into the birds’ digestive tracts, along with chemicals in the plastics that keep them soft and pliable. But plastic bits aren’t always pliable enough to get through a gull’s gut. Most birds have trouble passing large bits of plastic, and they build up in the stomach, sometimes taking up so much room that the birds can’t consume enough food to stay healthy."

Read more of this story at Slashdot.

Apple Partners With Cisco To Boost Enterprise Business

Slashdot.org - Tue, 01/09/2015 - 1:35am
An anonymous reader writes: Apple and Cisco announced a partnership aimed at helping Apple's devices work better for businesses. Cisco will provide services specially optimized for iOS devices across mobile, cloud, and on-premises collaboration tools such as Cisco Spark, Cisco Telepresence and Cisco WebEx, the companies said in a statement. "What makes this new partnership unique is that our engineering teams are innovating together to build joint solutions that our sales teams and partners will take jointly to our customers," Cisco Chief Executive Chuck Robbins said in a blog post.

Read more of this story at Slashdot.

World's Most Powerful Digital Camera Sees Construction Green Light

Slashdot.org - Tue, 01/09/2015 - 12:53am
An anonymous reader writes: The Department of Energy has approved the construction of the Large Synoptic Survey Telescope's 3.2-gigapixel digital camera, which will be the most advanced in the world. When complete, the camera will weigh more than three tons and take such high-resolution pictures that it would take 1,500 high-definition televisions to display one of them. According to SLAC: "Starting in 2022, LSST will take digital images of the entire visible southern sky every few nights from atop a mountain called Cerro Pachón in Chile. It will produce a wide, deep and fast survey of the night sky, cataloging by far the largest number of stars and galaxies ever observed. During a 10-year time frame, LSST will detect tens of billions of objects—the first time a telescope will observe more galaxies than there are people on Earth – and will create movies of the sky with unprecedented details. Funding for the camera comes from the DOE, while financial support for the telescope and site facilities, the data management system, and the education and public outreach infrastructure of LSST comes primarily from the National Science Foundation (NSF)."

Read more of this story at Slashdot.

Seif Lotfy: Counting flows (Semi-evaluation of CMS, CML and PMC)

Planet GNOME - Tue, 01/09/2015 - 12:37am

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: we are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit we have received.

The most naive implementation is a dictionary in the form of <event, count>, which is the most accurate and is suitable for streams with a limited number of event types.

Let's assume a unique item consists of 15 bytes and has a dedicated uint32 (4-byte) counter assigned to it.

At 10 million unique items we end up using roughly 190 MB, which is a bit much, but on the plus side it's as accurate as it gets.
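For reference, here's the naive dictionary counter as a minimal Go sketch (the event names are made up; this is only an illustration of the approach, not code from the post):

package main

import "fmt"

func main() {
    // Exact counting: one map entry per unique event.
    counts := make(map[string]uint32)

    for _, event := range []string{"apple", "banana", "apple", "cherry", "apple"} {
        counts[event]++
    }

    fmt.Println(counts["apple"])  // 3
    fmt.Println(counts["banana"]) // 1
}

Memory grows linearly with the number of unique events, which is exactly the problem described next.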

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe save to a DB? Well, querying the DB upon request would mean something along the lines of:

SELECT count(event) FROM events WHERE event = ?

The more items we add, the more resource intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details, but to solve this problem I semi-evaluated the following data structures (a minimal Count-Min sketch is sketched after the list):

  • Count-Min sketch (CMS) [2]
  • Count-Min-Log sketch (CML) [1][2]
  • Probabilistic Multiplicity Counting sketch (PMC) [1]
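To make the comparison a bit more concrete, here is a minimal Count-Min sketch in Go, the language the evaluated code is written in. This is only an illustrative sketch: the sizing, hashing and API are simplified stand-ins and do not come from the libraries linked above.

package main

import (
    "fmt"
    "hash/fnv"
)

// CountMinSketch is a depth x width matrix of counters. Each added item
// increments one counter per row; the estimate is the minimum over rows.
type CountMinSketch struct {
    width  uint32
    depth  uint32
    counts [][]uint32
}

func NewCountMinSketch(width, depth uint32) *CountMinSketch {
    counts := make([][]uint32, depth)
    for i := range counts {
        counts[i] = make([]uint32, width)
    }
    return &CountMinSketch{width: width, depth: depth, counts: counts}
}

// hash derives a per-row bucket index by salting FNV-1a with the row number.
func (s *CountMinSketch) hash(item string, row uint32) uint32 {
    h := fnv.New32a()
    h.Write([]byte{byte(row)})
    h.Write([]byte(item))
    return h.Sum32() % s.width
}

// Add increments one counter per row for the given item.
func (s *CountMinSketch) Add(item string) {
    for row := uint32(0); row < s.depth; row++ {
        s.counts[row][s.hash(item, row)]++
    }
}

// Count returns the estimated frequency: the minimum over all rows, which
// may overestimate (hash collisions) but never underestimates.
func (s *CountMinSketch) Count(item string) uint32 {
    var min uint32
    for row := uint32(0); row < s.depth; row++ {
        c := s.counts[row][s.hash(item, row)]
        if row == 0 || c < min {
            min = c
        }
    }
    return min
}

func main() {
    // 27200 x 2 uint32 counters is roughly the 217 KB budget used below.
    cms := NewCountMinSketch(27200, 2)
    cms.Add("apple")
    cms.Add("apple")
    cms.Add("banana")
    fmt.Println(cms.Count("apple"), cms.Count("banana")) // 2 1 (never less)
}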

Test details:

For each sketch I added flows linearly, with a linearly growing number of events per flow: the first flow got 1 event inserted, the second flow got 2 events inserted, and so on, all the way up to the 10,000th flow with 10,000 events inserted (see the insertion sketch below).

flow 1: 1 event
flow 2: 2 events
...
flow 10000: 10000 events

All three data structures were configured to have a size of 217 KB (exactly 1,739,712 bits).
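Roughly, the insertion pattern is the loop sketched below; the Sketch interface is just a stand-in for the three implementations being compared (the CountMinSketch type from the earlier sketch would satisfy it), and the exactCounter is only there to make the example runnable:

package main

import "fmt"

// Sketch is the minimal surface the benchmark needs from CMS/CML/PMC.
type Sketch interface {
    Add(item string)
    Count(item string) uint32
}

// insertFlows adds flow i exactly i times, for i = 1..n,
// i.e. n*(n+1)/2 insertions in total (50,005,000 for n = 10000).
func insertFlows(s Sketch, n int) {
    for flow := 1; flow <= n; flow++ {
        id := fmt.Sprintf("flow-%d", flow)
        for e := 0; e < flow; e++ {
            s.Add(id)
        }
    }
}

// exactCounter is a trivial exact Sketch used here only as a placeholder.
type exactCounter map[string]uint32

func (c exactCounter) Add(item string)          { c[item]++ }
func (c exactCounter) Count(item string) uint32 { return c[item] }

func main() {
    c := exactCounter{}
    insertFlows(c, 10000)
    fmt.Println(c.Count("flow-10000")) // 10000
}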

A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):

CMS: 07s for 50,005,000 insertions (fill rate: 31%)
CML: 42s for 50,005,000 insertions (fill rate: 09%)
PMC: 18s for 50,005,000 insertions (fill rate: 54%)

CMS with ɛ: 0.0001, δ: 0.99 (code)

Observe the biased estimation of CMS: CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)

Just like CMS, CML is biased and will never underestimate. However, unlike CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.

PMC with (256x32) virtual matrices (code)

Unlike the previous two sketches, this one is unbiased, so underestimates exist. Also, the error of the estimated flow count increases with the actual flow count (linearly bigger errors). The drawback here is that PMC fills up very quickly, which means at some point it will just overestimate everything. It is recommended to know beforehand what the maximum number of different flows will be.

Bringing it all together

So, what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on that on a weekend. The code was all written in Go, so feel free to suggest optimizations or fix any bugs you find (links above the respective plots).

Kubuntu: Kubuntu Team Launches Plasma Mobile Reference Images

Planet UBUNTU - Tue, 01/09/2015 - 12:09am

The Kubuntu team is proud to announce the reference images for Plasma Mobile.

Plasma Mobile was announced today at KDE’s Akademy conference.

Our images can be installed on a Nexus 5 phone.

More information on Plasma Mobile’s website.

How Artificial Intelligence Can Fight Air Pollution In China

Slashdot.org - Tue, 01/09/2015 - 12:09am
An anonymous reader writes: IBM is testing a new way to help fix Beijing's air pollution problem with artificial intelligence. Like many other cities across the country, the capital is surrounded by many coal-burning factories. However, the air quality on a day-to-day basis can vary for a number of reasons, such as industrial activity, traffic congestion, and the weather. IBM is testing a computer system capable of learning to predict the severity of air pollution several days in advance using large quantities of data from several different models. "We have built a prototype system which is able to generate high-resolution air quality forecasts, 72 hours ahead of time," says Xiaowei Shen, director of IBM Research China. "Our researchers are currently expanding the capability of the system to provide medium- and long-term (up to 10 days ahead) as well as pollutant source tracking, 'what-if' scenario analysis, and decision support on emission reduction actions."

Read more of this story at Slashdot.

Junichi Uekawa: I've been writing ELF parser for fun using C++ templates to see how much I can simplify.

Planet Debian - Mon, 31/08/2015 - 11:36pm
I've been writing an ELF parser for fun using C++ templates, to see how much I can simplify it. I've been reading the bionic loader code enough these days that I already know what it would look like in C, and I gradually converted it into C++, but it's nevertheless fun to have a pet project that keeps growing.

Netflix Is Becoming Just Another TV Channel

Slashdot.org - Mon, 31/08/2015 - 11:25pm
An anonymous reader writes: Netflix revealed in a blog post that it will not renew its contract with Epix, meaning you won't be able to watch movies like The Hunger Games and World War Z through the service anymore. With the increase in cord-cutters and more original content, Netflix is positioning itself to be like any other TV channel (one that owns its own distribution model) and is betting that customers won't miss the Epix content. Chief Content Officer Ted Sarandos says, "While many of these movies are popular, they are also widely available on cable and other subscription platforms at the same time as they are on Netflix and subject to the same drawn out licensing periods."

Read more of this story at Slashdot.

Martin Albisetti: Developing and scaling Ubuntu One filesync, part 1

Planet UBUNTU - Mon, 31/08/2015 - 11:17pm

Now that we've open sourced the code for Ubuntu One filesync, I thought I'd highlight some of the interesting challenges we had while building and scaling the service to several million users.

The teams that built the service were roughly split into two: the foundations team, which was responsible for the lowest levels of the service (storage and retrieval of files, data model, client and server protocol for syncing), and the web team, focused on user-visible services (the website to manage files, photos, music streaming, contacts and the Android/iOS equivalent clients).
I joined the web team early on and stayed with it until we shut the service down, so that's where a lot of my stories will be focused.

Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. Given that by the time we launched them we had a few years of experience serving files at scale, our challenge turned out to be presenting and manipulating each user's metadata quickly, and being able to show the data in appealing ways (showing music by artist or genre, and searching, for example). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract metadata, parse it, store it and then be able to present it back to users quickly in different ways. Easy, right? It is, until you hit a certain scale.
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially, your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!). However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain time we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time frame (less than 5 seconds). As the service grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much-feared major architectural change. I say much feared because major architectural changes come with a lot of risk for running services that have low tolerance for outages or data loss: whenever you change something that's already running in a significant way, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast; we take it for granted. A 5-person team spending 6 months to make things work the way you expect isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
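As a tiny illustration of the user-id-based sharding mentioned above: the shard, and therefore the DB server, is derived from the user id. The DSNs and the modulo scheme here are made up for the example, not the actual Ubuntu One setup.

package main

import "fmt"

// shardDSNs lists one connection string per metadata shard
// (the post mentions about 8 master shards).
var shardDSNs = []string{
    "host=shard0 dbname=metadata",
    "host=shard1 dbname=metadata",
    "host=shard2 dbname=metadata",
    "host=shard3 dbname=metadata",
}

// shardFor returns the DSN of the database holding a given user's metadata.
func shardFor(userID int64) string {
    return shardDSNs[userID%int64(len(shardDSNs))]
}

func main() {
    fmt.Println(shardFor(42)) // host=shard2 dbname=metadata
}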
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and we decided to do that by extending our existing architecture instead of revamping it completely. Given that our code was pretty well built into proper layers of responsibility, we hooked an async process up to the lowest layer of our code (database transactions) that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computing is usually a good trade-off to make, both in cost and performance. So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose only focus would be to provide per-user metadata *fast* for any type of file, so we could build interesting and flexible user interfaces for people to consume their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update and 3) complete isolation from existing infrastructure.
Here's a rough diagram of how the information flowed through the system:

It's a little bit scary when looking at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG in the same transaction in which it's written to the permanent table. That way you get, for free, transactional guarantees that you won't lose any data on that layer, and you use PG's built-in cache, which keeps recently added records cheaply accessible.
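A minimal sketch of that dual write in Go using database/sql; the table and column names (file_metadata, txlog) and the DSN are hypothetical, not the actual Ubuntu One schema, which lives under src/backends/txlog/:

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // PostgreSQL driver
)

// saveFileMetadata writes the permanent record and the change-log row in
// the same transaction, so a change notification can never be lost.
func saveFileMetadata(db *sql.DB, userID int64, fileID string, payload []byte) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op once the transaction has been committed

    // 1) Permanent metadata table.
    if _, err := tx.Exec(
        `INSERT INTO file_metadata (user_id, file_id, payload) VALUES ($1, $2, $3)`,
        userID, fileID, payload); err != nil {
        return err
    }

    // 2) Temporary transaction-log table, picked up later by the workers
    //    that relay changes to RabbitMQ and then delete the row.
    if _, err := tx.Exec(
        `INSERT INTO txlog (user_id, file_id, payload) VALUES ($1, $2, $3)`,
        userID, fileID, payload); err != nil {
        return err
    }

    return tx.Commit()
}

func main() {
    db, err := sql.Open("postgres", "host=localhost dbname=metadata sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    if err := saveFileMetadata(db, 42, "file-1", []byte(`{"name":"photo.jpg"}`)); err != nil {
        log.Fatal(err)
    }
}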
Then we built a bunch of workers that looked through those rows, parsed them and sent them to a persistent queue in RabbitMQ; once a worker got confirmation that a row was queued, it would delete it from the temporary PG table.
Following that, we took advantage of Rabbit's queue exchange features to build different types of workers that process the data differently depending on what it was (music was stored differently than photos, for example).
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that would fit in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views, and the result was a significant reduction in load on the main DB servers, while now getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and Cassandra nodes).

All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.

You can see the code that wrote and read from the temporary PG table if you branch the code and look under: src/backends/txlog/
The worker code, as well as the web UI, is still not available but will be in the future once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself.

Michael Meeks: 2015-08-31 Monday.

Planet GNOME - Mon, 31/08/2015 - 11:00pm
  • Up very early, mail chew - partner call; team call - great to have Kendy & Miklos back. Reviewed some fixes for 5.0.2 read bugs etc. Product team call; slogged away until late on GDI and GL resource synchronization.

The Long Reach of Windows 95

Slashdot.org - Mon, 31/08/2015 - 10:42pm
jfruh writes: I'm a Mac guy — have been ever since the '80s. When Windows 95 was released 20 years ago, I was among those who sneered that "Windows 95 is Macintosh 87." But now, as I type these words on a shiny new iMac, I can admit that my UI — and indeed the computing landscape in general — owes a lot to Windows 95, the most influential operating system that ever got no respect. ITWorld reports: "... even though many techies tend to dismiss UI innovation as eye candy, the fact is that the changes made in Windows 95 were incredibly successful in making the system more accessible to users -- so successful, in fact, that a surprising number of them have endured and even spread to other operating systems. We still live in the world Windows 95 made. When I asked people on Twitter their thoughts about what aspects of Windows 95 have persisted, I think Aaron Webb said it best: 'All of it? Put a 15 year old in front of 3.1 and they would be lost. In front of Windows 95 they would be able to do any task quickly.'"

Read more of this story at Slashdot.

Nikhar Agrawal: Why Everyone Needs to Attend GUADEC!

Planet GNOME - Mon, 31/08/2015 - 10:09pm

This photo right here sums up one of the major reasons you should attend GUADEC.

The above pic was taken on the final day of the core conference days, after an incredible, fun-filled evening out in the park. It’s just dinner, right? What makes this so special? That’s people from more than 10 different countries sitting at that table, give or take. There’s Indonesia, Peru, India, Italy, Spain, USA, Canada, Czech Republic, Sweden, China, Korea and possibly more that I’ve missed.

Apart from all the fantastic technical talks, what made this GUADEC really special and an enriching experience for me was the range of conversations I had with people. I talked about the Brazilian and Indian caste systems with an amazing guy from Southern Brazil. I asked my Chinese friends about their views on Tibet and the religious and political scene in their country. During the dinner in the photo above, we conversed about a whole range of things, from vegetarianism to the situation in Korea to fixed-point arithmetic.

Just before the aforementioned dinner, we had an evening full of games where we did everything from sack race to balloon volleyball to popping balloons with our bums.  That was probably one of the most fun evenings I had spent in a long time. The games were organised in Slottskogen, a lush green park, in the beautiful beautiful city of Gothenburg.

That brings me to talking about Sweden. I fell in love with the place. The gorgeous archipelago, the thrilling Liseberg and such helpful people. I remember once asking a kind stranger for some help and we ended up talking about his family and his search for a new summer home.

But that’s enough about travel. How would this be a post about GUADEC without actually talking about the conference? I think the highlight of the conference for me would have to be the talk ‘How the Power of Community Prevails — It’s Not Only About the Code‘. It was a rousing talk, to say the least. For those of us who were not fully aware of the situation, it was a thorough guide. While the case itself made for some pretty intriguing stuff, what really moved me was how the community as a whole stood up to a big corporation and how the power of people prevailed. I’m pretty sure that at one point during the talk I had goosebumps. Also, it helped that the talk was peppered with humour.

A couple of other talks I particularly remember were ‘Adapting GNOME for use in the Developing World’ and ‘Fast and effective application design: live!’. The live part of the second talk was in particular fun. It was great to see everyone present in the room chipping in with their ideas and debating and arguing about the scope of application.

All in all, it was a wonderful experience, a trip that I’m going to remember for a long long time. And it wouldn’t have been possible without the generous sponsorship of the Gnome Foundation. Thanks a lot for giving me this splendid opportunity.


3 Category 4 Hurricanes Develop In the Pacific At Once For the First Time

Slashdot.org - Mon, 31/08/2015 - 10:00pm
Kristine Lofgren writes: For the first time in recorded history, three Category 4 hurricanes were seen in the Pacific Ocean at the same time. Climatologists have been warning that climate change may produce more extreme weather situations, and this may be a peek at the future to come. Eric Blake, a specialist with the National Hurricane Center, summed it up with a tweet: "Historic central/eastern Pacific outbreak- 3 major hurricanes at once for the first time on record!"

Read more of this story at Slashdot.

The Most Important Obscure Languages?

Slashdot.org - Mon, 31/08/2015 - 9:17pm
Nerval's Lobster writes: If you're a programmer, you're knowledgeable about "big" languages such as Java and C++. But what about those little-known languages you only hear about occasionally? Which ones have an impact on the world that belies their obscurity? Erlang (used in high-performance, parallel systems) springs immediately to mind, as does R, which is relied upon by mathematicians and analysts to crunch all sorts of data. But surely there are a handful of others, used only by a subset of people, that nonetheless inform large and important platforms that lots of people rely upon... without realizing what they owe to a language that few have ever heard of.

Read more of this story at Slashdot.

Book Review: Effective Python: 59 Specific Ways To Write Better Python

Slashdot.org - Mon, 31/08/2015 - 8:34pm
MassDosage writes: If you are familiar with the "Effective" style of books then you probably already know how this book is structured. If not, here's a quick primer: the book consists of a number of small sections, each of which focuses on a specific problem, issue or idea, and these are discussed in a "here's the best way to do X" manner. These sections are grouped into related chapters but can be read in pretty much any order and generally don't depend on each other (and when they do, this will be called out in the text). The idea is that you can read the book from cover to cover if you want, but you can also just dip in and out and read only the sections that are of interest to you. This also means that you can use the book as a reference in future when you inevitably forget the details or want to double check something. Read below for the rest of Mass Dosage's review.

Read more of this story at Slashdot.

Cities Wasting Millions of Taxpayers' Money In Failed IoT Pilots

Slashdot.org - Mon, 31/08/2015 - 7:51pm
dkatana writes: Two years ago at the Smart Cities Expo World Congress, Antoni Vives, then Barcelona's second deputy mayor, said he refused to have more technology pilots in the city: "I hate pilots, if anyone of you [technology companies] comes to me selling a pilot, just get away, I don't want to see you." He added, "I am fed up with the streets full of devices. It is a waste of time, a waste of money, and doesn't deliver anything; it is just for the sake of selling something to the press and it does not work." Barcelona is already a leading city in the use of IoT and, according to Fortune, "The most wired city in the world". Over the past 10 years, the city has experienced a surge in the number of sensors, data collection devices and automation and has become "a showcase for the smart metropolis of the future". Over the past few years technology companies have sold pilot programs costing millions of dollars to cities all over the world, claiming it will enhance their "Smart City" rating. Unfortunately, after the initial buzz, many of those pilots never get beyond the evaluation stage and are abandoned because the cities cannot afford them in the first place.

Read more of this story at Slashdot.

Matthew Garrett: Working with the kernel keyring

Planet GNOME - Mon, 31/08/2015 - 7:20pm
The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle it. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.
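For completeness, here's roughly the same sequence done from a program instead of the shell; a minimal sketch using golang.org/x/sys/unix, run as root (e.g. via sudo), with the key type, description and payload taken from the example above:

package main

import (
    "fmt"
    "log"

    "golang.org/x/sys/unix"
)

func main() {
    // Add the key to the *session* keyring first, so that we possess it
    // and are therefore allowed to change its permissions.
    id, err := unix.AddKey("user", "testkey", []byte("testdata"), unix.KEY_SPEC_SESSION_KEYRING)
    if err != nil {
        log.Fatalf("add_key: %v", err)
    }

    // Grant both the possessor and the user full permissions (0x3f3f0000).
    if err := unix.KeyctlSetperm(id, 0x3f3f0000); err != nil {
        log.Fatalf("setperm: %v", err)
    }

    // Link the key into UID 0's user keyring so other root sessions see it...
    if _, err := unix.KeyctlInt(unix.KEYCTL_LINK, id, unix.KEY_SPEC_USER_KEYRING, 0, 0); err != nil {
        log.Fatalf("link: %v", err)
    }

    // ...and unlink it from the invoking user's session keyring so an
    // unprivileged user can't revoke it from underneath us.
    if _, err := unix.KeyctlInt(unix.KEYCTL_UNLINK, id, unix.KEY_SPEC_SESSION_KEYRING, 0, 0); err != nil {
        log.Fatalf("unlink: %v", err)
    }

    fmt.Printf("key %d configured\n", id)
}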

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.

