
Foreign-language sites

Christian Schaller: GStreamer Conference 2014 talks online

Planet GNOME - Wed, 22/10/2014 - 8:35pm

For those of you who, like me, missed this year's GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high quality talk recordings online shortly after the conference ends. So be sure to check out this year's batch of great GStreamer talks.

Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few fewer deprecated widgets.

Nicholas Skaggs: Sprinting in DC: Tuesday

Planet UBUNTU - Wed, 22/10/2014 - 8:13pm
This week, my team and I are sprinting with many of the core app developers and other folks inside of Ubuntu Engineering. Each day I'm attempting to give you a glimpse of what's happening.

On Tuesday I was finally able to sit down with the team and plan our week. In addition, I was able to plan some of the work I had in mind with the community folks working on the core apps. Being obsessed with testing, my primary goals this week are centered around quality. Namely, I want to make it easier for developers to write tests. Asking them to write tests is much easier when it's easy to do so. Fortunately, I think (hope?) all of the community core apps developers recognize the benefits of tests and thus are motivated to drive maturity into the testing story.

I'm also keen to work on the manual testing story. The community is imperative in helping test images for not only Ubuntu, but also all of its flavors. Seriously, you should say thank you to those folks helping make sure your install of Ubuntu works well. They are busy this week helping make sure utopic is as good as it can be. Rock on, image testers! But the tools and process used weigh on my mind, and I'm keen to chat later in the week with the Canonical QA team and get their feedback.

During the day I attended sessions regarding changes and tweaks to the CI process. For core apps developers, errors in jenkins should be easier to replicate after these changes. CI will be moving to utilizing adt-run (autopkgtest) for their test execution (and you should too!). They will also provide the exact commands used to run the tests. That means you can easily duplicate the results on the dashboard locally and fix the issues found. No more 'works on my box' excuses!

I also met the team responsible for the application store and gave them feedback on the application submission process. Submitting apps is already so simple, but even more cool things are happening on this front.

The end of the evening found us shuffling into cabs for a team dinner. We had a long table of folks eating Italian food and getting to know each other better.


After dinner, I pressured a few folks into having some dessert and ordered a sorbet for myself. After receiving no less than 4 fruit sorbets due to a misunderstanding, I began carving the fruits and sending plates of sorbet down the table. My testcase failed however when the plates all came back :-(



Nicholas Skaggs: Sprinting in DC: Monday

Planet UBUNTU - Wed, 22/10/2014 - 8:01pm
This week, my team and I are sprinting in Washington DC with many of the core app developers and other folks inside of Ubuntu Engineering. Sprints are always busy, but the work tends to be a mix of social and technical. I get to assign names (IRC nicknames mostly) to faces as well as get to know my co-workers and other community members better.

I thought it might be useful to give writeups each day of what's going on, at least from my perspective during the sprint. I won't yammer on too much about quality and instead bring you pictures of what you really want. And some of this too. Whoops, here's one.

Pictures of people taking pictures . . .

Monday was the first day of the sprint, and also the day of my arrival! Personally I'm busy at home during this week, so it's tough to get away. That said, I can't imagine being anywhere else for the week. The sprints are a wonderful source of respite for everyone.

Monday itself consisted of making sure everything is ready for the week, planning events, and icebreakers. In typical fashion, an opening plenary set the bar for the week with notes about the progress being made on the phone as well as the future of the desktop. Lots of meetings and a few blurry jet lagged hours later, everyone was ready to sit for a bit and have some non-technical conversation!

Fortunately for us there was an event planned to meet both our social and hunger needs. After being split randomly into teams of bugs (love the play on quality), we played a bit of trivia. After each round teams were scored not only on the correct response, but also on how quickly they responded. The questions varied from the obscure to fun bits about Ubuntu. The final round centered around Canonical itself, which was a fun trip down memory lane.

As I crawled into bed I still had the wonderfully cheesy announcer playing trivia questions in my head.


Petter Reinholdtsen: listadmin, the quick way to moderate mailman lists - nice free software

Planet Debian - Wed, 22/10/2014 - 8:00pm

If you have ever had to moderate a mailman list, like the ones on alioth.debian.org, you know the web interface is fairly slow to operate. First you visit one web page, enter the moderation password, and get a new page shown with a list of all the messages to moderate and various options for each email address. This takes a while for every list you moderate, and you need to do it regularly to do a good job as a list moderator. But there is a quick alternative: the listadmin program. It allows you to check lists for new messages to moderate in a fraction of a second. Here is a test run on two lists I recently took over:

% time listadmin xiph
fetching data for pkg-xiph-commits@lists.alioth.debian.org ... nothing in queue
fetching data for pkg-xiph-maint@lists.alioth.debian.org ... nothing in queue

real    0m1.709s
user    0m0.232s
sys     0m0.012s
%

In 1.7 seconds I had checked two mailing lists and confirmed that there are no messages in the moderation queue. Every morning I currently moderate 68 mailman lists, and it normally takes around two minutes. When I took over the two pkg-xiph lists above a few days ago, there were 400 emails waiting in the moderator queue. It took me less than 15 minutes to process them all using the listadmin program.

If you install the listadmin package from Debian and create a file ~/.listadmin.ini with content like this, the moderation task is a breeze:

username@example.org
spamlevel 23
default discard
discard_if_reason "Posting restricted to members only. Remove us from your mail list."
password secret
adminurl https://{domain}/mailman/admindb/{list}
mailman-list@lists.example.com
password hidden
other-list@otherserver.example.org

There are other options to set as well. Check the manual page to learn the details.

If you are forced to moderate lists on a mailman installation where the SSL certificate is self-signed or not properly signed by a generally accepted signing authority, you can set an environment variable when calling listadmin to disable SSL verification:

PERL_LWP_SSL_VERIFY_HOSTNAME=0 listadmin

If you want to moderate a subset of the lists you take care of, you can provide an argument to the listadmin script, like I do in the initial screen dump (the xiph argument). With an argument, only lists matching the argument string will be processed. This makes it quick to accept messages if you notice the moderation request in your email.

Without the listadmin program, I would never be the moderator of 68 mailing lists, as I simply would not have time for it if the process were any slower. The listadmin program has saved me hours of time over the years that I could spend elsewhere. It truly is nice free software.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Google Announces Inbox, a New Take On Email Organization

Slashdot.org - Wed, 22/10/2014 - 8:00pm
Z80xxc! writes: The Gmail team announced "Inbox" this morning, a new way to manage email. Inbox is email, but organized differently. Messages are grouped into "bundles" of similar types. "Highlights" pull out and display key information from messages, and messages can be "snoozed" to come back later as a reminder. Inbox is invite-only right now, and you can email inbox@google.com to request an invite.

Read more of this story at Slashdot.

Astronomers Find Brightest Pulsar Ever Observed

Slashdot.org - Wed, 22/10/2014 - 7:17pm
An anonymous reader writes: Astronomers using the Chandra X-ray Observatory and the NuSTAR satellite have discovered a pulsar so bright that it challenges how scientists think pulsars work. While observing galaxy M82 in hopes of spotting supernovae, the researchers found an unexpected source of X-rays very close to the galaxy's core. It was near another source, thought to be a black hole. But the new one was pulsing, which black holes don't do. The trouble is that according to known pulsar models, it's about 100 times brighter than the calculated limits to its luminosity (abstract). Researchers used a different method to figure out its mass, and the gap shrank, but it's still too bright to fit their theories.

Read more of this story at Slashdot.

Carlos Soriano: Development of Nautilus – Popovers, port to GAction and more

Planet GNOME - Wed, 22/10/2014 - 6:55pm

So for the last two weeks, I have been trying to implement this:

The popovers!

In an application that already uses GAction and a normal GMenu for everything, this is quite easy.

But Nautilus uses neither GAction nor GMenu for its menus. Not only that, Nautilus uses GtkUIManager for managing the menus and GtkActions. And on top of that, Nautilus merges parts of menus all over the code.

Also, the popover drawn in that design is not possible with GMenu because of the GtkSlider.

So my first step, when nothing was clear to me, was just to try to create a custom GtkBox class to embed in the popover and try to use the current architecture of Nautilus.

It didn’t work, obviously. Fail 1.

Then, after talking with some Gedit guys (thanks!), I understood that what I needed was to port Nautilus to GAction first. But I would also have to find a solution for merging menus.
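To make that target concrete, here is a minimal sketch of the GAction/GMenu pattern, written in Python with PyGObject rather than the C that Nautilus actually uses; the "reload" action and the widget layout are made up for illustration and are not taken from Nautilus:

# Minimal GAction/GMenu sketch (Python/PyGObject, hypothetical names):
# actions are registered on the window and menu items reference them by
# name, so a popover can be built from a plain GMenu model.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gio

def on_activate(app):
    win = Gtk.ApplicationWindow(application=app)

    reload_action = Gio.SimpleAction.new("reload", None)   # exposed as "win.reload"
    reload_action.connect("activate", lambda a, p: print("reload activated"))
    win.add_action(reload_action)

    menu = Gio.Menu()
    menu.append("Reload", "win.reload")     # the item only names the action

    button = Gtk.MenuButton(menu_model=menu, use_popover=True)
    header = Gtk.HeaderBar(show_close_button=True)
    header.pack_end(button)
    win.set_titlebar(header)
    win.show_all()

app = Gtk.Application()
app.connect("activate", on_activate)
app.run(None)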

I spent my first week and a half trying to find a solution for how to merge the menus, while making the port to GAction, refactoring Nautilus code so that it made sense, and getting used to the Nautilus code.

The worst part was the complexity of the code: understanding it and its intricate code paths. Making a new test application with GMenu and popovers that merged menus was kinda acceptable.

To understand why I needed to merge menus recursively: this is the recursion of Nautilus menus that was done with GtkUIManager across 4 levels of classes. The diagram should have more leaves (more classes injecting items) at some levels, but this was the most complex one:

So after spending more than a week trying to make it work at all costs, I figured out that merging menus recursively in recursive sections was not going to work. That was kinda frustrating.

Big fail 2.

Then I decided to take another path, with the experience earned during that week and a half.

I simplified the menu layout to be flat (I still have to merge one-level menus, so a new way to merge menus was born), put all the management of the actions on the window instead of having multiple GtkActionGroups scattered around the code as Nautilus had previously, made the update of menus centralized in the window, attached the menus where they make sense (on the toolbar), and, a beautiful thing, the toolbar of Nautilus (aka the header bar) is now in an XML GResource file, no longer built programmatically =).
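As a rough illustration of what merging one-level menus can look like with GMenu (a generic sketch with hypothetical action names, not the actual Nautilus helper), each source menu can simply become a section of the merged model:

# Generic one-level menu merge via GMenu sections (hypothetical action names).
from gi.repository import Gio

def merge_flat_menus(*menus):
    merged = Gio.Menu()
    for menu in menus:
        merged.append_section(None, menu)   # each source menu becomes one section
    return merged

view_menu = Gio.Menu()
view_menu.append("Reload", "win.reload")

sort_menu = Gio.Menu()
sort_menu.append("Sort by Name", "win.sort-by-name")

popover_model = merge_flat_menus(view_menu, sort_menu)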

That last thing required redoing a good part of the toolbar, for example to use the private bindings that GObject provides (and then be able to use gtk_widget_class_bind_template_child_private), or to sync the sensitivity of some widgets by modifying the actions on the window directly instead of on the toolbar, etc.
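For readers who know this idea from Python rather than C, a later PyGObject feature (Gtk.Template) expresses the same template-binding concept; the sketch below is written under that assumption, with a made-up DemoToolbar class, while Nautilus itself does this in C with gtk_widget_class_bind_template_child_private() and a GResource-embedded UI file:

# Sketch of template-bound children in Python (requires a recent PyGObject
# with Gtk.Template); the class and child names are made up for illustration.
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

TEMPLATE = """\
<interface>
  <template class="DemoToolbar" parent="GtkBox">
    <child>
      <object class="GtkButton" id="back_button">
        <property name="label">Back</property>
      </object>
    </child>
  </template>
</interface>
"""

@Gtk.Template(string=TEMPLATE)
class DemoToolbar(Gtk.Box):
    __gtype_name__ = "DemoToolbar"

    back_button = Gtk.Template.Child()   # bound to the <object id="back_button">

toolbar = DemoToolbar()   # toolbar.back_button is now the GtkButton from the template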

And thanks to the experience earned in the fails before, it started working!

Then I became enthusiastic about getting more and more parts of Nautilus ported. After the prototype worked this morning, everything was kinda easy. And now I feel much more (like a very big difference) confident with the code of Nautilus, C, GTK+ and GObject.

Here’s the results

It’s still a very early prototype, since the port to GAction is not completed. I think I have 40% of the port done. And I didn’t erased all the code that now it’s not necesary. But with a prototype working and the difficult edges solved, that doesn’t worry me at all.

Work to be done is:

* Complete the port to GAction, porting all the menus as well.

* Refactor so the code makes more sense with the current workflow of menus and actions.

* Create the public API to allow extensions to extend the menus. Luckily I was thinking about that when creating the API to merge the menus inside Nautilus, so the method will be more or less the same.

* And last but not least, make sure any regression is known (this is kinda complicated due to the many possible code paths and supported tools of Nautilus).

Hope you like the work!

PD: Work is being done in wip/gaction but please, don’t look at the code yet =)


Raspberry Pi Founder Demos Touchscreen Display For DIY Kits

Slashdot.org - Wed, 22/10/2014 - 6:34pm
An anonymous reader writes: Over 4 million Raspberry Pis have been sold so far, and now founder Eben Upton has shown off a touchscreen display panel that's designed to work with it. It's a 7" panel, roughly tablet sized, but slightly thicker. "With the incoming touchscreen panel The Pi Foundation is clearly hoping to keep stoking the creative fires that have helped drive sales of the Pi by slotting another piece of DIY hardware into the mix." Upton also discussed the Model A+ Raspberry Pi board — an updated version they'll be announcing soon.

Read more of this story at Slashdot.

Shooting At Canadian Parliament

Slashdot.org - Wed, 22/10/2014 - 5:55pm
CBC reports that a man pulled up to the War Memorial in downtown Ottawa, got out of his car, and shot a soldier with a rifle. The Memorial is right next to the Canadian Parliament buildings. A shooter (reportedly the same one, but unconfirmed) also approached Parliament and got inside before he was shot and killed. "Scott Walsh, who was working on Parliament Hill, said ... the man hopped over the stone fence that surrounds Parliament Hill, with his gun forcing someone out of their car. He then drove to the front doors of Parliament and fired at least two shots, Walsh said." Canadian government officials were quickly evacuated from the building, while the search continues for further suspects. This comes a day after Canada raised its domestic terrorism threat level. Most details of the situation are still unconfirmed -- CBC has live video coverage here. They have confirmed that there was a second shooting at the Rideau Center, a shopping mall nearby.

Read more of this story at Slashdot.








Shooting At Canadian Parliament

Slashdot.org - Mër, 22/10/2014 - 5:55md
CBC reports that a man pulled up to the War Memorial in downtown Ottawa, got out of his car, and shot a soldier with a rifle. The Memorial is right next to the Canadian Parliament buildings. A shooter (reportedly the same one, but unconfirmed) also approached Parliament and got inside before he was shot and killed. "Scott Walsh, who was working on Parliament Hill, said ... the man hopped over the stone fence that surrounds Parliament Hill, with his gun forcing someone out of their car. He then drove to the front doors of Parliament and fired at least two shots, Walsh said." Canadian government officials were quickly evacuated from the building, while the search continues for further suspects. This comes a day after Canada raised its domestic terrorism threat level. Most details of the situation are still unconfirmed -- CBC has live video coverage here. They have confirmed that there was a second shooting at the Rideau Center, a shopping mall nearby.

Read more of this story at Slashdot.


What It Took For SpaceX To Become a Serious Space Company

Slashdot.org - Wed, 22/10/2014 - 5:47pm
An anonymous reader writes: The Atlantic has a nice profile of SpaceX's rise to prominence — how a private startup managed to successfully compete with industry giants like Boeing in just a decade of existence. "Regardless of its inspirations, the company was forced to adopt a prosaic initial goal: Make a rocket at least 10 times cheaper than is possible today. Until it can do that, neither flowers nor people can go to Mars with any economy. With rocket technology, Musk has said, "you're really left with one key parameter against which technology improvements must be judged, and that's cost." SpaceX currently charges $61.2 million per launch. Its cost-per-kilogram of cargo to low-earth orbit, $4,653, is far less than the $14,000 to $39,000 offered by its chief American competitor, the United Launch Alliance. Other providers often charge $250 to $400 million per launch; NASA pays Russia $70 million per astronaut to hitch a ride on its three-person Soyuz spacecraft. SpaceX's costs are still nowhere near low enough to change the economics of space as Musk and his investors envision, but they have a plan to do so (of which more later)."

Read more of this story at Slashdot.

Software Glitch Caused 911 Outage For 11 Million People

Slashdot.org - Wed, 22/10/2014 - 5:05pm
HughPickens.com writes: Brian Fung reports at the Washington Post that earlier this year emergency services went dark for over six hours for more than 11 million people across seven states. "The outage may have gone unnoticed by some, but for the more than 6,000 people trying to reach help, April 9 may well have been the scariest time of their lives." In a 40-page report (PDF), the FCC found that an entirely preventable software error was responsible for causing 911 service to drop. "It could have been prevented. But it was not," the FCC's report reads. "The causes of this outage highlight vulnerabilities of networks as they transition from the long-familiar methods of reaching 911 to [Internet Protocol]-supported technologies." On April 9, the software responsible for assigning the identifying code to each incoming 911 call maxed out at a pre-set limit; the counter literally stopped counting at 40 million calls. As a result, the routing system stopped accepting new calls, leading to a bottleneck and a series of cascading failures elsewhere in the 911 infrastructure. Adm. David Simpson, the FCC's chief of public safety and homeland security, says having a single backup does not provide the kind of reliability that is ideal for 911. "Miami is kind of prone to hurricanes. Had a hurricane come at the same time [as the multi-state outage], we would not have had that failover, perhaps. So I think there needs to be more [distribution of 911 capabilities]."

Read more of this story at Slashdot.

Zygmunt Krynicki: Launching a process to monitor stdout, stderr and exit code reliably

Planet UBUNTU - Wed, 22/10/2014 - 4:50pm
Recently I have been fixing a rather difficult bug that deals with doing one simple task reliably: run a program and watch (i.e. intercept and process) stdout and stderr until the process terminates.
Doing this is surprisingly difficult, and I was certainly caught out by a few mistakes the first time I tried to do it. I recently posted a lengthy comment on the corresponding bug. It took me a few moments to carefully analyze and re-think the situation and how a reliable approach should work. Nonetheless, I am only human and I have certainly made my share of mistakes.
Below is a reproduction of my current approach. The implementation is still in progress but it seems to work (I need to implement the termination phase for non-killable processes and switch to fully non-blocking I/O). So far I've used epoll(7) and signalfd(7). I'm still planning to use timerfd_create(2) for the timer, perhaps with CLOCK_RTC for hard wall-clock-time limit enforcement. I'll post the full, complete examples once I'm done with this, but you can see what it mostly looks like today in the python-glibc git tree's demos/ directory.
I'd like to ask everyone that has experience with this part of systems engineering to poke holes in my reasoning and show how this might fail and misbehave. Thanks.
The current approach, which so far works well on all the pathological cases, is as follows. The general idea is that we're in an I/O loop, using non-blocking I/O and a select-like mechanism to wait for:
 - timeout (optional, new feature)
 - read side of the stdout pipe data
 - read side of the stdout pipe being closed
 - read side of the stderr pipe data
 - read side of the stderr pipe being closed
 - SIGCHLD being delivered with the intent to say that the process is dead

In general we keep looping and terminate only when the set of waited-for things (stdout depleted, stderr depleted, process terminated) is empty. This is not always true, so see below. The action that we take on each event is obviously different:

If the timeout has elapsed we proceed to send SIGTERM, reset the timer for the shutdown period, followed by SIGQUIT and another timer reset. After that we send SIGKILL. This can fail, as the process may have elevated itself beyond our capabilities. This is still undecided but perhaps, at this time, we should use an elevated process manager (see below). If we fail to terminate the process, special provisions apply (see below).

If we have data to read we just read and process it (send it to log files, process it, send it to .record.gz). This is a point where we can optimize the process and improve reliability in the event of a sudden system crash. Using more modern facilities we can implement tee in kernel space, which lowers the processing burden on Python and, in general, makes it more likely that the log files will see the actual output the process made just prior to its death. We can also use pipes in O_DIRECT (aka packet mode) here to ensure that all writes() end up as individual records, which is the intended design of the I/O log record concept. This won't address the inherent buffering that is enabled in all programs that detect when they are redirected and no longer attached to a tty.

Whenever one of the pipes is depleted (which may *never* happen, lesson learned) we just close our side.

When the child dies, and this is the most important part and the actual bugfix, we do the following sequence of events:
 - if we still have the stdout pipe open, read at most one PIPE_BUF. We cannot read more, as the pipe may live on forever and we could just hang as we currently do. Reading one PIPE_BUF ensures that we catch the last moments of what the originally started process intended to tell us. Then we close the pipe. This will likely result in SIGPIPE in any processes that are still attached to it, though we have no guarantee that it will really kill them, as that signal can be blocked.
 - if we still have stderr pipe open we follow the same logic as for stdout above.
 - we restore some signal handling that was blocked during the execution of the loop and terminate.

There's one more trick up our sleeve and that is PR_SET_CHILD_SUBREAPER, but I'll describe that in a separate bug report that deals with runaway processes. Think dbus-launch or anything that double-forks and daemonizes.
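For readers who want something concrete to poke holes in, here is a rough, stdlib-only Python sketch of that loop. It is not the python-glibc implementation (which uses epoll(7) and signalfd(7)); SIGCHLD is turned into a readable descriptor with signal.set_wakeup_fd() instead, the timeout and the SIGTERM/SIGQUIT/SIGKILL escalation are omitted, and the command being run is just a stand-in:

# Rough sketch only: watch stdout, stderr and child death from one select loop.
import os, signal, selectors, subprocess

PIPE_BUF = 4096  # conservative stand-in for the kernel's PIPE_BUF

# Poor man's signalfd: each delivered SIGCHLD writes a byte to sig_w,
# which makes sig_r readable in the loop below.
sig_r, sig_w = os.pipe()
os.set_blocking(sig_r, False)
os.set_blocking(sig_w, False)
signal.set_wakeup_fd(sig_w)
signal.signal(signal.SIGCHLD, lambda *args: None)   # a handler must be installed

child = subprocess.Popen(["sh", "-c", "echo hello; echo oops >&2"],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)

sel = selectors.DefaultSelector()
pipes = {child.stdout: "stdout", child.stderr: "stderr"}
for stream in pipes:
    os.set_blocking(stream.fileno(), False)
    sel.register(stream, selectors.EVENT_READ)
sel.register(sig_r, selectors.EVENT_READ)

def close_pipe(stream):
    sel.unregister(stream)
    stream.close()
    del pipes[stream]

child_exited = False
while pipes or not child_exited:
    for key, _ in sel.select():
        if key.fileobj == sig_r:
            os.read(sig_r, 64)              # drain the wakeup byte(s)
            if child.poll() is None:
                continue                    # child still alive, keep looping
            child_exited = True
            # The child is dead: read at most one PIPE_BUF from each pipe that
            # is still open, then close it, so a runaway grandchild holding the
            # write end cannot make us hang forever.
            for stream in list(pipes):
                try:
                    tail = os.read(stream.fileno(), PIPE_BUF)
                except BlockingIOError:
                    tail = b""
                print(pipes[stream], "tail:", tail)
                close_pipe(stream)
        elif key.fileobj in pipes:
            stream = key.fileobj
            data = os.read(stream.fileno(), PIPE_BUF)
            if data:
                print(pipes[stream], ":", data)   # real code would log/record this
            else:
                close_pipe(stream)                # EOF: the write side was closed

print("exit code:", child.returncode)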
If you have any comments or ideas please post them here (wherever you are reading this), on the launchpad bug report page or via email. Thanks a lot!

Tim Janik: Apache SSLCipherSuite without POODLE

Planet GNOME - Wed, 22/10/2014 - 4:35pm
In my previous post Forward Secrecy Encryption for Apache, I've described an Apache SSLCipherSuite setup to support forward secrecy which allowed TLS 1.0 and up, avoided SSLv2 but included SSLv3. With the new POODLE attack (Padding Oracle On Downgraded Legacy Encryption), SSLv3 (and earlier versions) should generally be avoided. Which means the cipher configurations discussed [...]

Windows 0-Day Exploited In Ongoing Attacks

Slashdot.org - Wed, 22/10/2014 - 4:23pm
An anonymous reader writes: Microsoft is warning users about a new Windows zero-day vulnerability that is being actively exploited in the wild and is primarily a risk to users on servers and workstations that open documents with embedded OLE objects. The vulnerability is currently being exploited via PowerPoint files. These specially crafted files contain a malicious OLE (Object Linking and Embedding) object. This is not the first time a vulnerability in OLE has been exploited by cybercriminals, however most previous OLE vulnerabilities have been limited to specific older versions of the Windows operating system. What makes this vulnerability dangerous is that it affects the latest fully patched versions of Windows.

Read more of this story at Slashdot.

DHS Investigates 24 Potentially Lethal IoT Medical Devices

Slashdot.org - Wed, 22/10/2014 - 3:39pm
An anonymous reader writes: In the wake of the U.S. Food and Drug Administration's recent recommendations to strengthen security on net-connected medical devices, the Department of Homeland Security is launching an investigation into 24 cases of potential cybersecurity vulnerabilities in hospital equipment and personal medical devices. Independent security researcher Billy Rios submitted proof-of-concept evidence to the FDA indicating that it would be possible for a hacker to force infusion pumps to fatally overdose a patient. Though the complete range of devices under investigation has not been disclosed, it is reported that one of them is an "implantable heart device." William Maisel, chief scientist at the FDA's Center for Devices and Radiological Health, said, "The conventional wisdom in the past was that products only had to be protected from unintentional threats. Now they also have to be protected from intentional threats too."

Read more of this story at Slashdot.

Hungary To Tax Internet Traffic

Slashdot.org - Wed, 22/10/2014 - 2:57pm
An anonymous reader writes: The Hungarian government has announced a new tax on internet traffic: 150 HUF ($0.62 USD) per gigabyte. In Hungary, a monthly internet subscription costs around 4,000-10,000 HUF ($17-$41), so it could really put a constraint on different service providers, especially for streaming media. This kind of tax could set back the country's technological development by some 20 years — to the pre-internet age. As a side note, the Hungarian government's budget is running at a serious deficit. The internet tax is officially expected to bring in about 20 billion HUF in income, though a quick look at the BIX (Budapest Internet Exchange) and a bit of math suggests a better estimate of the income would probably be an order of magnitude higher.

Read more of this story at Slashdot.

Ubuntu LoCo Council: Regular LoCo Council Meeting for 21 October 2014

Planet UBUNTU - Wed, 22/10/2014 - 2:45pm

Meeting information

#ubuntu-meeting: Regular LoCo Council Meeting for October 2014, 21 Oct at 20:00 — 21:33 UTC
Full logs at http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-10-21-20.00.log.html
Meeting summary

Opening Business

The discussion about “Opening Business” started at 20:00.

Listing of Sitting Members of LoCo Council (20:00)
For the avoidance of uncertainty and doubt, it is necessary to list the members of the council who are presently serving active terms.
Marcos Costales, term expiring 2015-04-16
Jose Antonio Rey, term expiring 2015-10-04
Pablo Rubianes, term expiring 2015-04-16
Sergio Meneses, term expiring 2015-10-04
Stephen Michael Kellat, term expiring 2015-10-04
There is currently one vacant seat on LoCo Council
Roll Call (20:00)
Vote: LoCo Council Roll Call (All Members Present To Vote In Favor To Register Attendance) (Carried)
Re-Verification: France

The discussion about “Re-Verification: France” started at 20:03.

Vote: That the re-verification application of France be approved and that the period of verification be extended for a period of two years from this date. (Carried)
Update on open cases before the LoCo Council

The discussion about “Update on open cases before the LoCo Council” started at 20:19.

LoCo Council presently has before it pending verification and re-verification proceedings for the following LoCo Teams: Mauritius, Finland, Netherlands, Peru, Russia, Serbia.
The loco-contacts thread “Our teams reject the new LoCo Council policy”

The discussion about “The loco-contacts thread ‘Our teams reject the new LoCo Council policy’” started at 20:20.

Requests from the Galician and Asturian teams

The discussion about “Requests from the Galician and Asturian teams” started at 20:59.

Vote: That the Galician Team, pursuant to their request this day, be considered an independent LoCo team notwithstanding representing less than a country. (Carried)
Vote: That the Asturian Team, pursuant to their request this day, be considered an independent LoCo Team notwithstanding representing less than a country. (Carried)
Marcos Costales, in his capacity as leader of Ubuntu Spain and as a member of LoCo Council, stood aside from both votes.
Any Other Business

The discussion about “Any Other Business” started at 21:13.

Those who have requests of the LoCo Council are advised to write to it at loco-council@lists.ubuntu.com for assistance.
Vote results

LoCo Council Roll Call (All Members Present To Vote In Favor To Register Attendance)

Motion carried (For/Against/Abstained 4/0/0)
Voters PabloRubianes, skellat, costales, SergioMeneses
That the re-verification application of France be approved and that the period of verification be extended for a period of two years from this date.

Motion carried (For/Against/Abstained 4/0/0)
Voters PabloRubianes, skellat, costales, SergioMeneses
That the Galician Team, pursuant to their request this day, be considered an independent LoCo team notwithstanding representing less than a country.

Motion carried (For/Against/Abstained 2/0/1)
Voters PabloRubianes, skellat, SergioMeneses
That the Asturian Team, pursuant to their request this day, be considered an independent LoCo Team notwithstanding representing less than a country.

Motion carried (For/Against/Abstained 2/0/1)
Voters PabloRubianes, skellat, SergioMeneses

Xerox Alto Source Code Released To Public

Slashdot.org - Wed, 22/10/2014 - 2:15pm
zonker writes: In 1970, the Xerox Corporation established the Palo Alto Research Center (PARC) with the goal to develop an "architecture of information" and lay the groundwork for future electronic office products. The pioneering Alto project that began in 1972 invented or refined many of the fundamental hardware and software ideas upon which our modern devices are based, including raster displays, mouse pointing devices, direct-manipulation user interfaces, windows and menus, the first WYSIWYG word processor, and Ethernet. The first Altos were built as research prototypes. By the fall of 1976 PARC's research was far enough along that a Xerox product group started to design products based on their prototypes. Ultimately, ~1,500 were built and deployed throughout the Xerox Corporation, as well as at universities and other sites. The Alto was never sold as a product but its legacy served as inspiration for the future. With the permission of the Palo Alto Research Center, the Computer History Museum is pleased to make available, for non-commercial use only, snapshots of Alto source code, executables, documentation, font files, and other files from 1975 to 1987. The files are organized by the original server on which they resided at PARC that correspond to files that were restored from archive tapes. An interesting look at retro-future.

Read more of this story at Slashdot.

Konstantinos Margaritis: Eigen NEON port extended to ARMv8!

Planet Debian - Wed, 22/10/2014 - 12:44pm

Soon after the VSX port, and as promised, I have completed the ARMv8 NEON (a.k.a. Advanced SIMD) port. Basically this extends support to 64-bit doubles and also provides faster alternatives to division, as ARMv8 has built-in instructions for division both for 32-bit floats and 64-bit doubles. Preliminary benchmarks (bench_gemm):
