
Mozilla Checks If Firefox Is Affected By Same Malware Vulnerability As Tor

Slashdot.org - Fri, 16/09/2016 - 10:40 PM
Mozilla is investigating whether the fully patched version of Firefox is affected by the same cross-platform, malicious code-execution vulnerability patched on Friday in the Tor browser. Dan Goodin, reporting for ArsTechnica: The vulnerability allows an attacker who has a man-in-the-middle position and is able to obtain a forged certificate to impersonate Mozilla servers, Tor officials warned in an advisory. From there, the attacker could deliver a malicious update for NoScript or any other Firefox extension installed on a targeted computer. The fraudulent certificate would have to be issued by any one of several hundred Firefox-trusted certificate authorities (CAs). While it probably would be challenging to hack a CA or trick one into issuing the necessary certificate for addons.mozilla.org, such a capability is well within reach of nation-sponsored attackers, who are precisely the sort of adversaries included in the Tor threat model. In 2011, for instance, hackers tied to Iran compromised Dutch CA DigiNotar and minted counterfeit certificates for more than 200 addresses, including Gmail and the Mozilla addons subdomain.

Read more of this story at Slashdot.

Web Security CEO Warns About Control Of Internet Falling Into Few Hands

Slashdot.org - Fri, 16/09/2016 - 10:00 PM
The idea behind the internet was to make a massive, decentralized system that wasn't under the control of anyone, but that is increasingly changing, according to Matthew Prince, CEO of web security company CloudFlare. His statements come at a time when Google, Facebook, and other companies are increasingly building new products and services and locking users into their respective walled gardens. From a CNBC report: "More and more of the internet is sitting behind fewer and fewer players, and there are benefits of that, but there are also real risks," said Matthew Prince, chief executive officer of web security company CloudFlare, in an interview with CNBC. His comments came at CloudFlare's Internet Summit -- a conference featuring tech executives and government security experts -- on Tuesday in San Francisco. "If everything sits behind Facebook and you can't publish pictures like that, is the world a better place? Probably not," said Prince. "Before you know it, you could wake up and find more of the internet sits behind a small number of gate-keepers," said Prince. Putting that sort of power in the hands of a small number of people and companies "might not be the best thing," he said. Still, the wave of consolidation among the major internet companies is likely to continue, at least for now, he said.

Read more of this story at Slashdot.

Half Of US Smartphone Users Download Zero Apps Per Month

Slashdot.org - Fri, 16/09/2016 - 9:20 PM
Apple's iOS users may have downloaded more than 140 billion apps since the App Store was launched in 2008, but the reality is that a huge number of people just don't try out new apps anymore. We noted a few weeks ago how people were showing less interest in apps, and now we have more confirmation on that front. According to comScore, some 49 percent of U.S. smartphone users download zero apps in a typical month. Recode reports: Of the 51 percent of smartphone owners who do download apps during the course of a month, "the average number downloaded per person is 3.5," comScore's report says. "However, the total number of app downloads is highly concentrated at the top, with 13 percent of smartphone owners accounting for more than half of all download activity in a given month."

Read more of this story at Slashdot.

Windows 10 Haters: Try Linux On Kaby Lake Chips With Dell's New XPS 13

Slashdot.org - Fri, 16/09/2016 - 8:40 PM
Attention, Linux enthusiasts: the OS of your choice can finally run on laptops with Intel's Kaby Lake chips. Dell is releasing three new models of its slick XPS 13 Developer Edition that will be available with Ubuntu and 7th Generation Core processors in the U.S. and Canada starting on Oct. 10, reports PCWorld. From the article: Prices for the XPS 13 DE will start at $949. Dell also announced the XPS 13 model with Kaby Lake and Windows 10, which will ship on Oct. 4 starting at $799. Dell didn't share details on what version of the Ubuntu desktop OS will be preloaded. It officially supports Ubuntu 14.04 in existing laptops, but could pre-load version 16.04 on the new XPS 13 DE. Dell has remained committed to Linux while major PC vendors shift to Windows 10 on PCs. Intel made a major commitment to supporting Windows 10 with its new Kaby Lake chips but hasn't talked much about Linux support. The XPS 13 DE is perhaps the sexiest and thinnest Linux laptop available, with an edge-to-edge screen being a stand-out feature. It is the latest in Dell's Project Sputnik line of laptops, and it is targeted at computer enthusiasts who want a Windows or Mac alternative. A knock against Linux is that the OS has lagged behind Windows on driver development and on supporting the latest technologies like USB-C ports, 4K screens, and Thunderbolt. Project Sputnik started four years ago as an effort between Dell and the open-source community to bridge that gap, and since then, the resulting laptops have achieved cult status among Linux enthusiasts. A Dell XPS 13 with a Core i5 chip will have a full HD screen, 8GB of RAM, and a 128GB SSD. Another configuration will have a 3200 x 1800-pixel screen, Core i5, and a 256GB SSD. A fully loaded model will have a Core i7 chip, a 512GB SSD, 16GB of RAM, and a 3200 x 1800-pixel screen.

Read more of this story at Slashdot.

Woman Faces $9,100 Verizon Bill For Data She Says She Didn't Use

Slashdot.org - Fri, 16/09/2016 - 8:00 PM
A Verizon Wireless customer says she received a bill of $9,100 for hundreds of gigabytes of data she says she never used. The woman told the Cleveland Plain Dealer she was on Verizon's 4GB shared data plan, so, like any normal person, she couldn't make sense of an $8,535 charge for consuming 569GB of data in a matter of a few days. The problem, as DSLR reports, is that when she tried to find out what caused the data usage, Verizon's website told her "the activity you are trying to perform is currently unavailable. Please try again later." She couldn't get an answer and switched to T-Mobile, after which Verizon charged her a $600 penalty.

Read more of this story at Slashdot.

AP, Vice, USA Today Sue FBI For Info On Phone Hack of San Bernardino Shooter

Slashdot.org - Fri, 16/09/2016 - 7:20 PM
Three news organizations filed a lawsuit Friday seeking information about how the FBI was able to break into the locked iPhone of one of the gunmen in the December terrorist attack in San Bernardino. From a USA Today report: The Justice Department spent more than a month this year in a legal battle with Apple over whether it could force the tech giant to help agents bypass a security feature on Syed Rizwan Farook's iPhone. The dispute roiled the tech industry and prompted a fierce debate about the extent of the government's power to pry into digital communications. It ended when the FBI said an "outside party" had cracked the phone without Apple's help. The news organizations' lawsuit seeks information about the source of the security exploit agents used to unlock the phone, and how much the government paid for it. It was filed in federal court in Washington by USA TODAY's parent company, Gannett, the Associated Press and Vice Media. The FBI refused to provide that information to the organizations under the Freedom of Information Act. The lawsuit charges that "there is no lawful basis" for the FBI to keep the records secret.

Read more of this story at Slashdot.

Right To Be Forgotten? Web Privacy Debate in Italy After Woman's Suicide

Slashdot.org - Fri, 16/09/2016 - 6:40 PM
The suicide of a woman who battled for months to have a video of her having sex removed from the internet is fuelling debate in Italy on the "right to be forgotten" online. The 31-year-old, identified as Tiziana, was found hanged at her aunt's home in Mugnano, close to Naples in the country's south on Tuesday, reports Agence France-Presse. From the report: Her death came a year after she sent a video of herself having sex to some friends, including her ex-boyfriend, to make him jealous. The video and her name soon found their way to the web and went viral, fuelling mockery of the woman online. The footage has been viewed by almost a million internet users. In a bid to escape the humiliation, Tiziana quit her job, moved to Tuscany and tried to change her name, but her nightmare went on. The words "You're filming? Bravo," spoken by the woman to her lover in the video, have become a derisive joke online, and the phrase has been printed on T-shirts, smartphone cases and other items. After a long court battle, Tiziana recently won a "right to be forgotten" ruling ordering the video to be removed from various sites and search engines, including Facebook.

Read more of this story at Slashdot.

Autonomous Vehicles Won't Give Us Any More Free Time, Says Study

Slashdot.org - Fri, 16/09/2016 - 6:00 PM
An anonymous reader writes: People hoping that the driverless cars of the future will give them more free time while travelling may be in for a disappointment. Increased productivity is one of the expected benefits of self-driving cars, but a new study claims that they will have little impact. The study showed that nearly 36 percent of Americans say they would be so apprehensive using a driverless vehicle that they would only watch the road. Meanwhile, UK drivers were even more cautious at 44 percent. "Currently, in the US, the average occupant of a light-duty vehicle spends about an hour a day traveling -- time that could potentially be put to more productive use," said Michael Sivak, research professor at the University of Michigan Transportation Research Institute. "Indeed, increased productivity is one of the expected benefits of self-driving vehicles."

Read more of this story at Slashdot.

Russia Bans Pornhub, YouPorn - Tells Citizens To Meet Someone In Real Life

Slashdot.org - Fri, 16/09/2016 - 5:22 PM
Russia has blocked two of the biggest porn websites. The Russian state watchdog Roskomnadzor, which oversees the internet in the country, announced that it was blocking Pornhub and YouPorn. When a Russian citizen asked on Twitter about an alternative, it replied: "as an alternative you can meet someone in real life." From a report: The regulator dropped the banhammer on Tuesday, applying rules which had previously been imposed by two separate regional courts. Any Russian citizen visiting PornHub or YouPorn is now redirected to a simple message telling them that the sites have been blocked "by decision of public authorities." Sexually explicit material isn't illegal in the country, but according to the BBC's Vitaliy Shevchenko, the law confusingly appears to ban "the illegal production, dissemination, and advertisement of pornographic materials and objects."

Read more of this story at Slashdot.

Christian Hergert: Builder Nightly Flatpak

Planet GNOME - Fri, 16/09/2016 - 10:23 AM

First off, I’ll be in Portland at the first ever LAS giving demos and starting on Builder features for 3.24.

For a while now, you’ve been able to get Builder from the gnome-apps-nightly Flatpak repository. Until now, it had a few things that made it difficult to use. We care a whole lot about making our tooling available via Flatpak because it is going to allow us to get new code into users’ hands more quickly, safely, and reliably.

So over the last couple of weeks I’ve dug in and really started polishing things up. A few patches in Flatpak, a few patches in Builder, and a few patches in Sysprof start getting us towards something refreshing.

Python Jedi should be working now, so you can autocomplete in Python (including GObject Introspection) to your heart’s delight.

Jhbuild from within the Flatpak works quite well now. So if you have a jhbuild environment on your host, you can use the Flatpak nightly and still target your jhbuild setup.

One of the tricks to being a module maintainer is getting other people to do your work. Thankfully the magnificent Patrick Griffis came to the rescue and got polkit building inside of Flatpak. Combined with some additional Sysprof patches, we have a profiler that can run from Flatpak.

Another pain point was that the terminal was inside of a pid/mount/network namespace different than that of the host. This meant that /usr/lib was actually from the Flatpak runtime, not your system /usr/lib. This has been fixed using one of the new developer features in Flatpak.

Flatpak now supports executing programs on the host (a sandbox breakout) for applications that are marked as developer tools. For those of you building your own Flatpaks, this requires --allow=devel when running the flatpak build-finish command. Of course, one could expect UI/UX flows to make this known to the user so that it doesn’t get abused for nefarious purposes.
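For those unfamiliar with the tooling, a minimal sketch of such a build-finish invocation might look like the following (the build directory and command name are made up for illustration; only the --allow=devel flag comes from the text above):

```
flatpak build-finish my-builddir \
    --command=my-dev-tool \
    --allow=devel
```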

Now that we have access to execute a command on the host using the HostCommand method of the org.freedesktop.Flatpak.Development interface, we can piggy back to execute our shell.

The typical pty dance, performed in our program.

/* error handling excluded */
int master_fd = vte_pty_get_fd (pty);
grantpt (master_fd);
unlockpt (master_fd);
char *name = ptsname (master_fd);
int pty_fd = open (name, O_RDWR, 0);

Then when executing the HostCommand method we simply pass pty_fd as a mapping to stdin (0), stdout (1), and stderr (2).

On the Flatpak side, it will check if any of these file-descriptors are a tty (with the convenient isatty() function). If so, it performs the necessary ioctl() to make our spawned process the controller of the pty. Lovely!

So now we have a terminal, whose process is running on the host, using a pty from inside our Flatpak.

Joaquim Rocha: Endless and LAS GNOME

Planet GNOME - Fri, 16/09/2016 - 8:23 AM

I’ve been spending the week in San Francisco where I’ve been going every day to the awesome Endless‘ office in SoMa.
It’s been really great to talk in person to all the people I usually have to ping on the internetz and experience a bit of the office life in San Francisco.

Next Monday I am speaking at the Libre Application Summit GNOME in Portland about how we’re managing and delivering the applications to our Endless OS’s users. I am also very curious to check out the city of Portland as everybody tells me good things about it.
If you’re attending the event, come say hi!

Luis Villa: Copyleft and data: databases as poor subject

Planet GNOME - Wed, 14/09/2016 - 3:05 PM

tl;dr: Open licensing works when you strike a healthy balance between obligations and reuse. Data, and how it is used, is different from software in ways that change that balance, making reasonable compromises in software (like attribution) suddenly become insanely difficult barriers.

In my last post, I wrote about how database law is a poor platform to build a global public copyleft license on top of. Of course, whether you can have copyleft in data only matters if copyleft in data is a good idea. When we compare software (where copyleft has worked reasonably well) to databases, we’ll see that databases are different in ways that make even “minor” obligations like attribution much more onerous.

Card Puncher from the 1920 US Census.
How works are combined

In software copyleft, the most common scenarios to evaluate are merging two large programs, or copying one small file into a much larger program. In this scenario, understanding how licenses work together is fairly straightforward: you have two licenses. If they can work together, great; if they can’t, then you don’t go forward, or, if it matters enough, you change the license on your own work to make it work.

In contrast, data is often combined in three ways that are significantly different than software:

  • Scale: Instead of a handful of projects, data is often combined from hundreds of sources, so doing a license conflicts analysis if any of those sources have conflicting obligations (like copyleft) is impractical. Peter Desmet did a great job of analyzing this in the context of an international bio-science dataset, which has 11,000+ data sources.
  • Boundaries: There are some cases where hundreds of pieces of software are combined (like operating systems and modern web services) but they have “natural” places to draw a boundary around the scope of the copyleft. Examples of this include the kernel-userspace boundary (useful when dealing with the GPL and Linux kernel), APIs (useful when dealing with the LGPL), or software-as-a-service (where no software is “distributed” in the classic sense at all). As a result, no one has to do much analysis of how those pieces fit together. In contrast, no natural “lines” have emerged around databases, so either you have copyleft that eats the entire combined dataset, or you have no copyleft. ODbL attempts to manage this with the concept of “independent” databases and produced works, but after this recent case I’m not sure even those tenuous attempts hold as a legal matter anymore.
  • Authorship: When you combine a handful of pieces of software, most of the time you also control the licensing of at least one of those pieces of software, and you can adjust the licensing of that piece as needed. (Widely-used exceptions to this rule, like OpenSSL, tend to be rare.) In other words, if you’re writing a Linux kernel driver, or a WordPress theme, you can choose the license to make sure it complies. Not necessarily the case in data combinations: if you’re making use of large public data sets, you’re often combining many other data sources where you aren’t the author. So if some of them have conflicting license obligations, you’re stuck.
How attribution is managed

Attribution in large software projects is painful enough that lawyers have written a lot on it, and open-source operating systems vendors have built somewhat elaborate systems to manage it. This isn’t just a problem for copyleft: it is also a problem for the supposedly easy case of attribution-only licenses.

Now, again, instead of dozens of authors, often employed by the same copyright-owner, imagine hundreds or thousands. And imagine that instead of combining these pieces in basically the same way each time you build the software, imagine that every time you have a different query, you have to provide different attribution data (because the relevant slices of data may have different sources or authors). That’s data!

The least-bad “solution” here is to (1) tag every field (not just data source) with licensing information, and (2) have data-reading software create new, accurate attribution information every time a new view into the data is created. (I actually know of at least one company that does this internally!) This is not impossible, but it is a big burden on data software developers, who must now include a lawyer in their product design team. Most of them will just go ahead and violate the licenses instead, pass the burden on to their users to figure out what the heck is going on, or both.
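A toy sketch of what option (1) implies for downstream code, in Python; every name here (LicensedValue, attribution_for, the source and license strings) is invented for illustration, not from any real data library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LicensedValue:
    """A single field tagged with its provenance, per the 'least-bad solution'."""
    value: object
    source: str   # who authored this field
    license: str  # license identifier, e.g. "CC-BY-4.0"

def attribution_for(rows):
    """Regenerate attribution for whatever slice of data a query returned:
    the distinct (source, license) pairs behind the result."""
    return sorted({(f.source, f.license) for row in rows for f in row})

# Two records whose fields come from different upstream datasets
row1 = [LicensedValue("Berlin", "gov-db", "CC-BY-4.0"),
        LicensedValue(3_600_000, "wiki-db", "CC0-1.0")]
row2 = [LicensedValue("Paris", "gov-db", "CC-BY-4.0")]

print(attribution_for([row1, row2]))
```

Note that every new view of the data has to re-run this bookkeeping, which is exactly the burden on data-software developers described above.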

Who creates data

Most software is either under a very standard and well-understood open source license, or is produced by a single entity (or often even a single person!) that retains copyright and can adjust that license based on their needs. So if you find a piece of software that you’d like to use, you can either (1) just read their standard FOSS license, or (2) call them up and ask them to change it. (They might not change it, but at least they can if they want to.) This helps make copyleft problems manageable: if you find a true incompatibility, you can often ask the source of the problem to fix it, or fix it yourself (by changing the license on your software).

Data sources typically can’t solve problems by relicensing, because many of the most important data sources are not authored by a single company or single author. In particular:

  • Governments: Lots of data is produced by governments, where licensing changes can literally require an act of the legislature. So if you do anything that goes against their license, or two different governments release data under conflicting licenses, you can’t just call up their lawyers and ask for a change.
  • Community collaborations: The biggest open software relicensing that’s ever been done (Mozilla) required getting permission from a few thousand people. Successful online collaboration projects can have 1-2 orders of magnitude more contributors than that, making relicensing hard. Wikidata solved this the right way: by going with CC0.
What is the bottom line?

Copyleft (and, to a lesser extent, attribution licenses) works when the obligations placed on a user are in balance with the benefits those users receive. If they aren’t in balance, the materials don’t get used. Ultimately, if the data does not get used, our egos feel good (we released this!) but no one benefits, and regardless of the license, no one gets attributed and no new material is released. Unfortunately, even minor requirements like attribution can throw the balance out of whack. So if we genuinely want to benefit the world with our data, we probably need to let it go.

So what to do?

So if data is legally hard to build a license for, and the nature of data makes copyleft (or even attribution!) hard, what to do? I’ll go into that in my next post.

Og Maciel: Podcasts I've Been Listening To Lately

Planet GNOME - Tue, 13/09/2016 - 6:00 AM

For someone who has run his own podcast for several years (albeit not generating a lot of content lately), it took me quite some time to actually start listening to podcasts myself. Ironic, I know, but I guess the main reason behind this was that I was always reading code at work and eventually, no matter how hard I tried, I just couldn't pay attention to what was being said! No matter how interesting the topic being discussed was or how engaging the host (or hosts) were, my brain would be so focused on reading code that everything else just turned into white noise.

Well, fast forward a couple of years and I still am reading code (though not as much as I used to due to a new role), and I still have a hard time listening to podcasts while at work... so I decided to only listen to them when I was not working. Simple, right? But it took me a while to make that change for some reason.

Anyhow, I now listen to podcasts while driving (which I don't really do a lot of since I work from home 99.99% of the time) or when I go for walks, and after a while I have started following a handful of them which are now part of my weekly routine:

  • All The Books, which provides me with an up-to-date list of suggestions for what books to read next. They're pretty regular with their episodes, so I can always count on hearing about new books pretty much every week.
  • Book Riot for another dose of more news about books!
  • Hack the Entrepreneur to keep up with people who are making something out of what they are passionate about.
  • Wonderland Podcast, which I only started listening to a few weeks back but which has turned into one of my favorites.
  • Science Vs, another new addition to my list, with entertaining takes on interesting topics such as 'the G-spot', 'Fracking', 'Gun Control' and 'Organic Food'.

Today I was introduced to Invisibilia and though I only listened to the first 10 minutes (I was given the link during working hours, so no go for me), I'm already very interested and will follow it.

I do have other podcasts that I am still subscribed to, but the ones listed here are the ones I still follow every episode. Maybe if I had to drive to work every day or went for walks more often, I would listen to more podcasts? Trust me though, I'd rather continue listening to only a small set of them than drive to work every day. Don't get me wrong, I love going to work, but that's 2 hours/day of my life that I'd rather spend at home :)

Marcus Lundblad: Maps marching towards 3.22

Planet GNOME - Mon, 12/09/2016 - 9:56 PM
Long time since my last blog post, but here we go…

So, I just rolled the 3.21.92 release of GNOME Maps. This is the final beta release before the next stable (3.22.0).

The most noteworthy change will of course be the new tile provider, replacing the discontinued MapQuest tiles, courtesy of Mapbox!
We have also backported this to prior stable versions to keep things working in current distribution releases, and in the future we will also have the ability to switch tile sources without patching release versions, as Maps now fetches a service definition file. And maybe (if time and effort permit) we might expand into the territory of client-side rendering of vector data, which opens up some possibilities, such as rendering various layers of interesting stuff, like a specific type of point of interest: "show all restaurants in this area".

Another nice feature, thanks to Marius Stanciu's work in libchamplain, is that we can now render the map continuously around the globe (at longitude 180°), so we're no longer pretending there's an edge of the world, but rather acknowledging what Eratosthenes predicted around 200 BC :-)
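The wrap-around at the antimeridian ultimately comes down to normalizing longitudes; here is an illustrative sketch of that idea in Python (not the actual libchamplain/Maps code, which is C):

```python
def wrap_longitude(lon):
    """Normalize any longitude into the [-180, 180) range, so panning
    past 180 degrees comes back around instead of hitting an 'edge'."""
    return ((lon + 180.0) % 360.0) - 180.0

# A point panned 10 degrees past the antimeridian reappears at -170
print(wrap_longitude(190.0))
print(wrap_longitude(-185.0))
```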

Unfortunately we will not see support for public transit routing in 3.22…
We still need somewhere and something to run our OpenTripPlanner instance on, and this summer getting basic tile service back up has of course been priority one.

But stay tuned, and I will cook up a little status update on the stuff Andreas and I have been up to in this department too…

Jussi Pakkanen: The unspeakable horror of Visual Studio PDB files

Planet GNOME - Mon, 12/09/2016 - 7:42 PM
Disclaimer: everything said in the following blog post may be completely wrong. Since PDB files are (until very recently) completely undocumented everything below has been determined by trial and error. Doubt everything, believe nothing.
Debugging information, how hard can it be?

When compiling C-like languages, debug information is not a problem. It gets written in the object file along with code and when objects are linked to form an executable or shared library, each individual file's debug info is combined and written in the result file. If necessary, this information can then be stripped into a standalone file.
This seems like the simplest thing in the world. Which it is. Unless you are Microsoft.
Visual Studio stores debug information in standalone files with the extension .pdb. Whenever the compiler is invoked it writes an .obj file and then opens the pdb file to write the debug info in it (while removing the old one). At first glance this does not seem a bad design, and indeed it wasn't in the 80s when it was created (presumably).
Sadly multicore machines break this model completely. Individual compilation jobs should be executable in parallel (and in Unix they are) but with pdb files they can't be. Every compile process needs to obtain some sort of lock to be able to update the pdb file. Processes that could be perfectly isolated now spend a lot of time fighting with each other over access to the pdb file.
Fortunately you can tell VS to write each object file's pdb info in a standalone file which is then merged into the final pdb. It even works, unless you want to use precompiled headers.
VS writes a string inside the obj file pointing to the corresponding pdb file. However if you use precompiled headers it fails because the pdb strings are different in the source object and precompiled header object file and VS wants to join these two when generating the main .obj file. VS will then note that the strings are different and refuse to create the source object file, because merging two files with serialised arrays is a known unsolvable problem in computer science. The merge would work if you compiled the precompiled header separately for each object file and gave them the same temporary pdb file name. In theory at least; this particular rat hole is not one I wish to get into so I did not try.
This does work if you give both compilations the same target pdb file (the one for the final target). This gives you the choice between a slowdown caused by locking or a slowdown caused by lack of precompiled headers. Or, if you prefer, the choice of not having debug information.
But then it gets worse.
If you choose the locking slowdown then you can't take object files from one target and use them in other targets. The usual reasons are either to get just one object file for a unit test without needing a recompilation or emulating -Wl,--whole-archive (available natively only in VS2015 or later) by putting all object files in a different target. Trying gets you a linking error due to an incorrect pdb file name.
There was a blog post recently that Microsoft is having problems in their daily builds because compiling Windows is starting to take over 24 hours. I'm fairly certain this is one of the main reasons why.
But then it gets worse.
Debug information for foo.exe is written to a file called foo.pdb. The debug information for a shared library foo.dll is also written to a file called foo.pdb.  That means you can't have a dll and an exe with the same name in the same directory. Unfortunately this is what you almost always want because Windows does not have rpath so you can't instruct an executable to look up its dependencies elsewhere (Though you can fake it with PATH. Yes, really.)
Fortunately you can specify the name of the output pdb, so you can tell VS to generate foo_exe.pdb and foo_lib.pdb. Unfortunately VS will also generate a dozen other files besides the pdb, whose names come from the target basename and which you can not change. No matter what you do, files will be clobbered. Even if they were not clobbered, the files would still be useless, because VS the IDE assumes the file is called foo.pdb and refuses to work if it is not.
All this because writing debug information inside object files, which is where it belongs, is not supported.
But wait, it gets even worse.
Putting debug info in obj files was supported but is now deprecated.

In conclusion

Meson recently landed support for generating PDB files. Thanks to Nirbheek for getting this ball rolling. If you know that the above is complete bull and that it is possible to do all the things discussed above then please file bugs with information or, preferably, patches to fix them.

Eitan Isaacson: I Built A Smart Clock

Planet GNOME - Mon, 12/09/2016 - 6:00 PM
Problem Statement:
In today’s fast-paced world, while juggling work and personal life, sometimes we need to know what time it is.
Solution:
A chronometer you can hang on the wall.

Wake up, people. Clocks are the future. Embrace progress.

Hello, my name is Eitan. I am a clock maker.

Over the past year I have spent nights and weekends designing gear trains, circuit boards, soldering and writing software. The result is a clock. Not just any clock, it is a smart clock that could tell you what time it is on demand. It is internet-connected so you can remotely monitor what time it is in your home.

It is powered by three stepper motors, three hall effect sensors, and a miniature computer. It also ticks.

The future of time telling.

Why? Because in my hubris I thought it would be an easy weekend project, and then things got out of hand.

Gears

My first gear boxes were pretty elaborate, with different gear ratios for each motor/hand. I probably spent the most time in the design/print cycle trying to come up with a reliable solution that would transfer the torque from 3 separate motors to a single axis. I ended up with something much simpler, a 2:1 gear ratio for all hands.

An example of an early, super-elaborate and huge gear box. My final design.

Limit Switches

Another challenge I struggled with was how would the software know where each hand is at any given time? A stepper motor is nice because it has some predictability. Each one of its steps is equal, so if you know that a step size is 6 degrees, it will take 60 steps to complete a rotation. In our case, this isn’t good enough, because:

  1. The motor is not 100% guaranteed to complete each step. If there is too much resistance, it will fail. I struggled to design a perfect gear box that would never make things hard for the motors, but every once in a while a bad tooth will jam the motor for a step or two.
  2. The motor I chose for this project is the 28BYJ-48, mainly because it is cheap and you can get a pack of 5 from Amazon with drivers for only $12. Its big drawback is that there is no consensus on what the precise step size is. Internally the motor has 32 steps per revolution (11.25 degrees per step). But it has a set of reduction gears embedded in it that brings the steps per revolution to something like 2037.886. Not a nice number and, more importantly, not one a clock can step by exactly without slowly drifting out of precision.
  3. When the clock is first turned on, it has no way to know the initial position of each hand.
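Point 2 above is the classic fractional-ratio problem, and it can be handled in software with a Bresenham-style accumulator. A minimal sketch (my own illustration, not the author's code; the step count is the approximate figure quoted above):

```python
# Sketch: issue whole motor steps so that the cumulative count tracks the
# ideal fractional position, instead of rounding the ratio once and
# letting the error accumulate.

STEPS_PER_REV = 2037.886  # approximate figure for the 28BYJ-48

def step_schedule(ticks_per_rev=60):
    """Yield the number of whole steps to issue on each tick (e.g. each
    second for the second hand) so the hand never drifts more than half
    a motor step from its ideal position."""
    emitted = 0
    tick = 0
    while True:
        tick += 1
        ideal = STEPS_PER_REV * tick / ticks_per_rev
        due = round(ideal) - emitted
        emitted += due
        yield due
```

Over the first revolution the yielded counts sum to round(2037.886) = 2038, with individual ticks alternating between 33 and 34 steps, so the error stays bounded instead of building up revolution after revolution.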

I decided to solve all this with limit switches. If the clock hands somehow closed a circuit at the top of each rotation, we would at least know where "12 o'clock" is and would have something to work with. I thought about using buttons, metal contacts and the like, but I didn't like the idea of introducing more wear on a delicate mechanical system: how many times will the second hand brush past a contact before screwing it up?

So, I went with latching hall effect sensors. The basic concept is magnets. The sensor opens or closes a circuit depending on which pole of a magnet is nearby. I glued tiny magnets at opposite ends of the gears, and by checking the state of the sensor circuit after every motor step, the software can tell whether we just got to 6 o'clock or 12 o'clock.
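The edge detection this enables is simple: a latching sensor only changes state when the opposite magnetic pole passes, so a change in its reading between two motor steps pinpoints a magnet. A sketch of that logic (hypothetical code, assuming north is glued at 12, south at 6, and that a north pole latches the sensor high):

```python
def check_home(prev_reading, reading):
    """Classify the latching hall sensor state after one motor step.

    Returns "12" when the north-pole magnet (glued at 12 o'clock) just
    passed, "6" for the south-pole magnet, or None when nothing changed.
    """
    if reading == prev_reading:
        return None  # latched sensor unchanged: no magnet passed
    return "12" if reading else "6"

# Sweeping past both magnets produces exactly two events:
readings = [0, 0, 1, 1, 1, 0, 0]
events = [check_home(a, b) for a, b in zip(readings, readings[1:])]
# events is [None, "12", None, None, "6", None]
```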

Two magnets on opposite sides of the gear, and the hall effect sensor clamped just above the gear to detect polarity changes.

Circuits

Electricity is magic to me; I really don't understand it. I learned just enough to get this project going. I started by reading Adafruit tutorials and breadboarding. After assembling a forest of wires, I finally got a "working" clock. I wanted the final clock to be more elegant than this:

With some graph paper, I sketched out how I could arrange all of the circuits on a perf board and went to work soldering. After inhaling a lot of toxic fumes and leaving some of my skin on the iron, I got this:

I was happy with the result, but it could be prettier. Plus, I would never put myself through that again; I wanted to make this process as easy as possible. So I decided to splurge a bit and redesigned the board using Fritzing:

…and ordered PCB prints from their online service. After a month or so I got these in the mail:

Custom PCBs printed in Berlin!

Soldering on components was a breeze…

PCB with jumper headers and resistors.

Software

Software is my comfort zone, you don’t get burnt, electrocuted, or spend a whole day 3D printing just to find out your design is shit. My plan was to compensate for all the hardware imperfection in software. Have it be self-tuning, smart and terrific.

I chose to have NodeJS drive the clock, mostly because I had recently gotten comfortable with it, but also because it makes it easy to give this project a slick web interface.

Doing actual GPIO calls to move the motors didn't work well in JS, and Python wasn't cutting it either: I needed to move three motors simultaneously at intervals below 20 milliseconds, and the CPU would grind to a halt. So I ended up writing a small C program that does all the GPIO bits. You start it with an RPM as an argument, and it does the rest. It doesn't make sense to spawn a new process on each clock tick, so instead the JS code sends the process a signal each second.
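The per-second handshake can be sketched with POSIX signals. This illustration is in Python rather than the author's actual C helper, and the names are made up, but the mechanism (a long-lived stepper process that advances once per SIGUSR1) is the one described above:

```python
import os
import signal

ticks = 0  # in the real helper this would drive a motor step

def on_tick(signum, frame):
    """Advance the second hand by one tick; invoked once per signal."""
    global ticks
    ticks += 1

signal.signal(signal.SIGUSR1, on_tick)

# The NodeJS side would do something like
#     setInterval(() => process.kill(helperPid, 'SIGUSR1'), 1000);
# Here we simulate three seconds' worth of signals to ourselves:
for _ in range(3):
    os.kill(os.getpid(), signal.SIGUSR1)
```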

With the help of the Network Time Protocol and some fancy high school algebra, I was able to make the clock precise to the second. Just choose a timezone, and it will do the rest. It should even switch back and forth for daylight saving time (I haven't actually tested that; I'm waiting for DST to end naturally).
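The algebra in question boils down to a linear map from wall-clock time to hand angles. A sketch (hypothetical helper, not the author's code):

```python
def hand_angles(hour, minute, second):
    """Degrees clockwise from 12 o'clock for the hour, minute and second
    hands, with the slower hands creeping continuously."""
    sec_angle = second * 6.0                             # 360 / 60
    min_angle = minute * 6.0 + second * 0.1              # 6 deg/min plus creep
    hour_angle = (hour % 12) * 30.0 + minute * 0.5 + second / 120.0
    return hour_angle, min_angle, sec_angle

# hand_angles(6, 30, 0) → (195.0, 180.0, 0.0): at half past six the hour
# hand sits halfway between the 6 and the 7.
```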

Mounting

I went to TAP Plastics, my go-to retailer for all things plastic. I bought a 12″ diameter 3/8″ thick acrylic disc. Drilled some holes, and got to mounting this whole caboodle. This project is starting to take shape! With the help of a few colorful wires for extra flourish:

All the components mounted on the acrylic disc.

In Conclusion

The slick marketing from Apple and Samsung will have you believe that your life isn’t complete without their latest smart watch. They are wrong.

Your life isn’t complete because you haven’t built your own smart clock. Prime your soldering iron and get to work!


Niels Thykier: debhelper 10 is now available

Planet Debian - Sun, 11/09/2016 - 8:06 PM

Today, debhelper 10 was uploaded to unstable and is coming to a mirror near you “really soon now”. The actual changes between version “9.20160814” and version “10” are rather modest. However, it does mark the completion of debhelper compat 10, which has been under way since early 2012.

Some highlights from compat 10 include:

  • The dh sequence in compat 10 automatically regenerates autotools files via dh_autoreconf.
  • The dh sequence in compat 10 includes the dh-systemd debhelper utilities.
  • dh sequencer based packages now default to building in parallel (i.e. "--parallel" is the default in compat 10).
  • dh_installdeb now properly shell-escapes maintscript arguments.

For the full list of changes in compat 10, please review the contents of the debhelper(7) manpage. Beyond that, you may also want to upgrade your lintian to 2.5.47 as it is the first version that knows that compat 10 is stable.
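For a package that previously enabled these pieces by hand, the practical effect is less boilerplate. A sketch of a minimal debian/rules under compat 10 (assuming debian/compat contains "10"; illustrative, not from the announcement):

```make
#!/usr/bin/make -f
# Minimal debian/rules under compat 10: dh_autoreconf and the dh-systemd
# helpers run automatically and parallel building is the default, so the
# "--with autoreconf,systemd --parallel" arguments needed in compat 9
# can simply be dropped.
%:
	dh $@
```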

 


Filed under: Debhelper, Debian

Hideki Yamane: mirror disk usage: not so much as expected

Planet Debian - Sun, 11/09/2016 - 11:02 AM
Debian repository mirror server disk usage.

I would have guessed that with so many new packages being added to the repo, disk usage would grow quickly, but it hasn't grown much. Why?

Dirk Eddelbuettel: New package gettz on CRAN

Planet Debian - Sat, 10/09/2016 - 11:40 PM

gettz is now on CRAN in its initial release 0.0.1.

It provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when, e.g., the file /etc/localtime is somehow not a symlink into the corresponding zoneinfo file under, say, /usr/share/zoneinfo.

Duane McCully provided a nice StackOverflow answer with code that offers fallbacks via /etc/timezone (on Debian/Ubuntu) or /etc/sysconfig/clock (on RedHat/CentOS/Fedora, and rumour has it, BSD* systems) or /etc/TIMEZONE (on Solaris). The gettz micro-package essentially encodes that approach so that we have an optional fallback when Sys.timezone() comes up empty.
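The same fallback chain is easy to sketch outside R. The file names below are the ones from that answer; the helper itself is illustrative, not gettz's actual implementation:

```python
import os

# Candidate files, tried in order; their formats differ per OS family.
CANDIDATES = [
    "/etc/timezone",         # Debian/Ubuntu: bare zone name, e.g. Europe/Berlin
    "/etc/sysconfig/clock",  # RedHat/CentOS/Fedora: ZONE="America/Chicago"
    "/etc/TIMEZONE",         # Solaris: TZ=US/Mountain
]

def guess_timezone(paths=CANDIDATES):
    """Return the first timezone name found in the candidate files,
    or None if none of them yields one."""
    for path in paths:
        if not os.path.exists(path):
            continue
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if "=" in line:  # key=value style (ZONE=..., TZ=...)
                    key, _, value = line.partition("=")
                    if key.strip() in ("ZONE", "TZ"):
                        return value.strip().strip('"')
                else:            # /etc/timezone holds the bare name
                    return line
    return None
```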

In the previous paragraph, note the stark absence of OS X, where there seems to be nothing to query, and of course Windows. Contributions for either would be welcome.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sylvain Le Gall: Release of OASIS 0.4.7

Planet Debian - Sat, 10/09/2016 - 10:00 PM

I am happy to announce the release of OASIS v0.4.7.

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Drop support for OASISFormat 0.2 and 0.1.
  • New plugin "omake" to support build, doc and install actions.
  • Improve automatic tests (Travis CI and AppVeyor)
  • Trim down the dependencies (removed ocaml-gettext, camlp4, ocaml-data-notation)

Features:

  • findlib_directory (beta): to install libraries in sub-directories of findlib.
  • findlib_extra_files (beta): to install extra files with ocamlfind.
  • source_patterns (alpha): to provide module to source file mapping.

This version contains a lot of changes and is the result of a huge amount of work. The addition of OMake as a plugin is a huge step forward. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement to use "chdir" to handle the "-C" option, which led to propagating ~ctxt everywhere and to the design of OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.
