Microsoft Partners With D-Link To Deliver Speedier Wi-Fi in Rural Regions

Slashdot.org - Mon, 21/11/2016 - 4:20pm
Microsoft has partnered with networking equipment manufacturer D-Link to deliver speedier Wi-Fi to rural communities around the world. From a report on ZDNet: Dubbed "Super Wi-Fi", the wireless infrastructure is set to be based on the 802.11af protocol, and will take advantage of unused bandwidth in the lower-frequency white spaces between television channel frequencies, where signals travel further than at higher frequencies. A pilot of the first phase is commencing in an unnamed American state, with trials also slated to run in three other countries. "D-Link sees ourselves at the very heart of this kind of technical innovation and development. We also acknowledge that we have a role to play in helping all countries and future generations better connect," said Sydney-based D-Link managing director for ANZ Graeme Reardon. "Our goal is to use all of our 30 years' experience and expertise and our global footprint to help deliver Super Wi-Fi as a technological platform for growth to the world's underdeveloped regions."

Read more of this story at Slashdot.

Oracle Buys Dyn DNS Provider

Slashdot.org - Mon, 21/11/2016 - 3:40pm
Oracle announced today it is buying DNS provider Dyn, a company that was in the press lately after it was hit by a large-scale DDoS attack in October that resulted in many popular websites becoming inaccessible. From a TechCrunch report: Oracle plans to add Dyn's DNS solution to its bigger cloud computing platform, which already sells/provides a variety of Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) products. Oracle and Dyn didn't disclose the price of the deal but we are trying to find out. Dan Primack reports that it's around $600 million. We've also asked for a comment from Oracle about Dyn's recent breach, and whether the wheels were set in motion for this deal before or after the Mirai botnet attack in October.

Apple Abandons Development of Wireless Routers, To Focus On Products That Return More Profit

Slashdot.org - Mon, 21/11/2016 - 3:00pm
Apple has disbanded its division that develops wireless routers in a move that further sharpens the company's focus on consumer products that generate the bulk of its revenue, Bloomberg reports. From the article: Apple began shutting down the wireless router team over the past year, dispersing engineers to other product development groups, including the one handling the Apple TV. Apple hasn't refreshed its routers since 2013 following years of frequent updates to match new standards from the wireless industry. The decision to disband the team indicates the company isn't currently pushing forward with new versions of its routers. Routers are access points that connect laptops, iPhones and other devices to the web without a cable. Apple currently sells three wireless routers, the AirPort Express, AirPort Extreme, and AirPort Time Capsule. The Time Capsule doubles as a backup storage hard drive for Mac computers.

Andre Klapper: Code review in open source projects: Influential factors and actions

Planet GNOME - Mon, 21/11/2016 - 1:51pm

Coming from “Prioritizing volunteer contributions in free software development”, the Wikimedia Foundation allowed me to spend time on research about code review (CR) earlier in 2016. The theses and bullet points below incorporate random literature and comments from numerous people.
While the results might also be interesting for other free and open source software projects, they might not apply to your project for various reasons.

In Wikimedia we would like to review and merge better code faster, especially patches submitted by volunteers. Code review should be a tool, not an obstacle.
Benefits of code review are knowledge transfer, increased team awareness, and finding alternative solutions. Good debates help to get to a higher standard of coding and drive quality.[A1]

I see three dimensions of potential influential factors and potential actions (that often cannot be cleanly separated):

  • 3 aspects: social, technical, organizational.
  • 2 roles: contributor, reviewer.
  • 3 factors: Patch-Acceptance/Positivity-Likeliness, Patch-Time-to-review/merge, Contributor onboarding (not covered here).

In general, “among the factors we studied, non-technical (organizational and personal) ones are better predictors” (meaning: possible factors that might affect the outcome and interval of the code review process) “compared to traditional metrics such as patch size or component, and bug priority.”[S1]

Note that Wikimedia plans to migrate its code review infrastructure from Gerrit to Phabricator Differential at some point.

Unstructured review approach

An unstructured review approach potentially demotivates first patch contributors, but fast and structured feedback is crucial for keeping them engaged.

Set up and document a multi-phase, structured patch review process for reviewers: Three steps proposed by Sarah Sharp for maintainers / reviewers[A2], quoting:

  • Fast feedback whether it is wanted: Is the idea behind the contribution sound? / Do we want this? Yes, no. If the contribution isn’t useful or it’s a bad idea, it isn’t worth reviewing further. Or “Thanks for this contribution! I like the concept of this patch, but I don’t have time to thoroughly review it right now. Ping me if I haven’t reviewed it in a week.” The absolute worst thing you can do during phase one is be completely silent.[A2]
  • Architecture: Is the contribution architected correctly? Squash the nit-picky, perfectionist part of yourself that wants to comment on every single grammar mistake or code style issue. Instead, only include a sentence or two with a pointer to coding style documentation, or any tools they will need to run their contribution through.[A2]
  • Polishing: Is the contribution polished? Get to comment on the meta (non-code) parts of the contribution. Correct any spelling or grammar mistakes, suggest clearer wording for comments, and ask for any updated documentation for the code.[A2]
Lack of enough skillful, available, confident reviewers and mergers

Not enough skillful or available reviewers and potential lack of confident reviewers[W1]? Not enough reviewers with rights to actually merge into the codebase?

  • Capacity building: Discuss and consider handing out code review rights to more (trusted) volunteers by recognizing active users who mark patches as good-to-go or needs-improvement (based on statistics). Encourage them to become habitual and trusted reviewers, and actively nominate them to become maintainers[W2]. Potentially also identify people who no longer exercise their code review rights. Again this requires statistics (to identify very active reviewers) and stakeholders (to decide on nominations).
  • Review current code review patch approval handout practice (see Wikimedia’s related documentation about +2 rights in Gerrit).
  • Consider establishing prestigious roles for people, like “Reviewers”?[W3]
  • “we recommend including inexperienced reviewers so that they can gain the knowledge and experiences required to provide useful comments to change authors”[S2]; Reviewers who have prior experience give more useful comments as they have more knowledge about design constraints and implementation.[S2]
Under-resourced or unclear responsibilities

Lack of repository owners / maintainers, or under-resourced or unclear responsibilities when everyone expects someone else to review. (For the MediaWiki core code repository specifically, see related tasks T115852 and T1287.)

“Changes failing to capture a reviewer’s interest remain unreviewed”[S3] due to self-selecting process of reviewers, or everybody expects another person in the team to review. “When everyone is responsible for something, nobody is responsible”[W4].

  • Have better statistics (on proposed patches waiting for review for a long time) to identify unmaintained areas within a codebase or codebases with unclear maintenance responsibilities.
  • Define a role to “Assign reviews that nobody selects.”[S3] (There might be (old) code areas that only one or zero developers understand.) Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Clarify and centrally document which Engineering/Development/Product teams are responsible for which codebases, and Team/Maintainer ⟷ Codebase/Repository relations (Example: “How Wikimedia Foundation’s Reading team manages extensions”)
  • Actively outreach to volunteers for unmaintained codebases via Requesting repository ownership? Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Advertise a monthly “Project in need of a maintainer” campaign on a technical mailing list and/or blog posts?
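The statistics mentioned in the first bullet above could start very small. Here is a minimal Python sketch (the patch fields `id`, `repo`, `status`, `submitted`, and `last_review` are assumptions for illustration, not an existing Gerrit or Phabricator schema) that flags patches waiting too long without any review, grouped by repository:

```python
from datetime import datetime, timedelta

def stale_patches(patches, now, threshold_days=30):
    """Group open, never-reviewed patches older than threshold_days
    by repository, to highlight possibly unmaintained code areas."""
    stale = {}
    for p in patches:
        # Only open patches that never received any review count.
        if p["status"] != "open" or p.get("last_review"):
            continue
        if now - p["submitted"] >= timedelta(days=threshold_days):
            stale.setdefault(p["repo"], []).append(p["id"])
    return stale
```

Repositories that accumulate many entries here would be natural candidates for the “Project in need of a maintainer” campaign mentioned above.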
Hard to identify good reviewer candidates

Hard for new contributors to identify and add good reviewers.

“choice of reviewers plays an important role on reviewing time. More active reviewers provide faster responses” but “no correlation between the amount of reviewed patches on the reviewer positivity”.[S1]

  • Check “owners” tool in Phabricator “for assigning reviewers based on file ownership”[W5] so reviewers get notified of patches in their areas of interest. In Gerrit this exists but is limited.
  • Encourage people to become project members/watchers.[W6]
  • Organization specific: Either have automated updating of outdated manual list of Developers/Maintainers, or replace individual names on the list of Developers/Maintainers by links to Phabricator project description pages.
  • In the vague technical future, automatic reviewer suggestion systems could help[S2], like automatically listing people who lately touched code in a code repository or related tasks in an issue tracking system and the length of their current review queue. (Proofs of concept have been published in scientific papers but code is not always made available.)
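As a rough proof of concept of such a suggestion system, one could rank the developers who recently touched the changed files and discount those who already have a long review queue. All data structures below are hypothetical; a real system would pull them from the code review tool:

```python
from collections import Counter

def suggest_reviewers(changed_files, recent_authors, queue_lengths, top_n=3):
    """Rank reviewer candidates by familiarity (recent touches on the
    changed files), divided by 1 + the candidate's current review queue."""
    touches = Counter()
    for f in changed_files:
        for dev in recent_authors.get(f, []):
            touches[dev] += 1
    scored = {dev: n / (1 + queue_lengths.get(dev, 0))
              for dev, n in touches.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```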
Unhelpful reviewer comments

Due to unhelpful reviewer comments, contributors spend time on creating many revisions/iterations before successful merge.

  • Make sure documentation for reviewers states:
    • Reviewers’ CR comments considered useful by contributors: identifying functional issues; identifying corner cases potentially not covered; suggestions for APIs/designs/code conventions to follow.[S2]
    • Reviewers’ CR comments considered somewhat useful by contributors: coding guidelines; identifying alternative implementations or refactoring[S2]
    • Reviewers’ CR comments considered not useful by contributors: Authors consider reviewers praising on code segments, reviewers asking questions to understand the implementation, and reviewers pointing out future issues not related to the specific code (should be filed as tasks) as not useful.[S2]
    • Avoid negativity and ask the right questions the right way. As a reviewer, ask questions instead of making demands to foster a technical discussion: “What do you think about…?” “Did you consider…?” “Can you clarify…?” “Why didn’t you just…” provides a judgement, putting people on the defensive. Be positive.[A1]
    • If you learned something or found something done particularly well, give compliments. (As code review otherwise tends to be about critical feedback only.)[A1]
    • Tool specific: Agree and document how to use Gerrit’s negative review (CR-1): “Some people tend to use it in an ‘I don’t like this but go ahead and merge if you disagree’ sense which usually does not come across well. OTOH just leaving a comment makes it very hard to keep track – I have been asked in the past to -1 if I don’t like something but don’t consider it a big deal, because that way it shows up in Gerrit as something that needs more work.”[W7]
    • Stakeholders with different areas of expertise may need to split up reviewing the respective parts of a larger patch.
Weak review culture

Prioritization / weak review culture: more pressure to write new code than to review patches contributed? Code review “application is inconsistent and enforcement uneven.”[W8]

  • Introduce and foster routine and habit across developers to spend a certain amount of time each day for reviewing patches (or part of standup), and team peer review on complex patches[A1].
  • Write code to display “a prominent indicator of whether or not you’ve pushed more changesets than you’ve reviewed”[W9]?
  • Technical: Allow finding / explicitly marking first contributions by listing recent first contributions and their time to review on korma’s code_contrib_new_gone in T63563. Someone responsible to ping, follow up, and (with organizational knowledge) to add potential reviewers to such first patches. Might need an overall “Code Review wrangler” position similar to a Bugwrangler/Bugmaster.
  • Organization specific: Contact the WMF Team Practices Group about their thoughts how this can be fostered?
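The “prominent indicator” idea above could be backed by a tiny calculation over a developer’s recent activity. The event format here is an assumption for illustration:

```python
def review_balance(events):
    """Compute a pushed-vs-reviewed indicator per developer.
    events: iterable of (developer, kind) pairs, kind in {"pushed", "reviewed"}.
    Returns dev -> (pushed, reviewed, flag); flag is True when the
    developer has pushed more changesets than they have reviewed."""
    stats = {}
    for dev, kind in events:
        pushed, reviewed = stats.get(dev, (0, 0))
        if kind == "pushed":
            pushed += 1
        else:
            reviewed += 1
        stats[dev] = (pushed, reviewed)
    return {d: (p, r, p > r) for d, (p, r) in stats.items()}
```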
Workload of existing reviewers

Workload of existing reviewers; too many items on their list already.

Reviewer’s Queue Length: “the shorter the queue, the more likely the reviewer is to do a thorough review and respond quickly” and the longer the more likely it takes longer but “better chance of getting in” (due to more sloppy review?)[S1].

  • Code review tool support to propose reviewers, or to display how many unreviewed patches a reviewer is already added to, so the author can choose other reviewers. There is a proposal to add reviewers to patches[W2], but this requires good knowledge of the community members, as otherwise it just creates more noise.
  • Potentially document that “two reviewers find an optimal number of defects – the cost of adding more reviewers isn’t justified […]”[S3]
    • Documentation for reviewers: “we should encourage people to remove themselves from reviewers when they are certain they won’t review the patch. A lot of noise and wasted time is created by the fact that people are unable to keep their dashboards clean”[WA]
  • Tool specific: Gerrit’s negative review (CR-1) gets lost when a reviewer removes themselves (bug report) hence Gerrit lists (more) items which look unreviewed. Check if same problem exists in Phabricator Differential?
  • Tool specific: Agree whether ‘Patch cannot be merged due to conflicts; needs rebasing’ should be a reason to give CR-1[WB] in order to get a ‘cleaner’ list? (But depending on the Continuous Integration infrastructure tools of your project, such rejection via static analysis might happen automatically anyway.)
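Combining the queue-length finding from [S1] with the “two reviewers are optimal” result from [S3], a reviewer-picking heuristic might look like this sketch (candidate and queue data are assumed inputs):

```python
def pick_reviewers(candidates, queue_lengths, max_reviewers=2):
    """Prefer candidates with the shortest review queues (shorter
    queues correlate with faster, more thorough reviews per [S1]);
    cap at two reviewers, the optimum suggested by [S3]."""
    ranked = sorted(candidates, key=lambda dev: queue_lengths.get(dev, 0))
    return ranked[:max_reviewers]
```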
Poor quality of contributors’ patches

Due to poor quality of contributors’ patches, reviewers spend time on reviewing many revisions/iterations before successful merge. This might make reviewers ignore such patches instead of reviewing them again and again and giving yet another negative CR-1 review.

  • Make sure documentation for contributors states:
    • Small, independent, complete patches are more likely to be accepted.[S4]
    • “[I]f there are more files to review [in your patch], then a thorough review takes more time and effort”[S2] and “review effectiveness decreases with the number of files in the change set.”[S2]
    • Small patches (a maximum of 4 lines changed) “have a higher chance to be accepted than average, while large patches are less likely to be accepted” (probability) but “one cannot determine that the patch size has a significant influence on the time until a patch is accepted” (time)[S5]
    • Patch Size: “Review time [is] weakly correlated to the patch size” but “Smaller patches undergo fewer rounds of revisions”[S1]
    • Reasons for rejecting a patch (not all are equally decisive; “less decisive reasons are usually easier to judge” when it comes to costs explaining rejections):[S6]
      • Problematic implementation or solution: Compilation errors; Test failures; Incomplete fix; Introducing new bugs; Wrong direction; Suboptimal solution (works but there is a simpler or more efficient way); Solution too aggressive for end users; Performance; Security
      • Difficult to read or maintain: Including unnecessary changes (to split into separate patch); Violating coding style guidelines; Bad naming (e.g. variable names); Patch size too large (but rarely matters as it’s ambiguous – if necessary it’s not a problem); Missing docs; Inconsistent or misleading docs; No accompanied test cases (When should “No accompanied test cases” be a reason for a negative review? In which cases do we require unit tests?[W4] This should be more deterministic); Integration conflicts with existing code; Duplication; Misuse of API; risky changes to internal APIs; not well isolated
      • Deviating from the project focus or scope: Idea behind is not of core interest; irrelevant or obsolete
      • Affecting the development schedule / timing: Freeze; low urgency; Too late
      • Lack of communication or trust: Unresponsive patch authors; no discussion prior to patch submission; patch authors’ expertise and reputation[S6]
      • cf. Reasons of the Phabricator developers why patches can get rejected
    • There is a mismatch of judgement: Patch reviewers consistently consider test failures, incomplete fix, introducing new bugs, suboptimal solution, inconsistent docs way more decisive for rejecting than authors.[S6]
    • Propose guidelines for writing acceptable patches:[S6]
      • Authors should make sure that patch is in scope and relevant before writing patch
      • Authors should be careful not to introduce new bugs instead of only focusing on the target
      • Authors should not only care if the patch works well but also whether it’s an optimal solution
      • Authors should not include unnecessary changes and should check that corner cases are covered
      • Authors should update or create related documentation[S6] (for Wikimedia, see Development policy)
    • Patch author experience is relevant: Be patient and grow. “more experienced patch writers receive faster responses” plus more positive ones. In WebKit, contributors’ very first patch is likely to get positive feedback while for their 3rd to 6th patch it is harder.[S1]
  • Agree on who is responsible for testing and document responsibility. (Tool specific: Phabricator Differential can force patch authors to fill out a test plan.)[W7]

Likeliness of patch acceptance depends on: Developer experience, patch maturity; Review time impacted by submission time, number of code areas affected, number of suggested reviewers, developer experience.[S7]

Hard to realize a repository is unmaintained

Hard to realize how (in)active a repository is for a potential contributor.

  • Implement displaying “recent activity” information somewhere in the code repository browser and code review tool, to communicate expectations.
  • Have documentation that describes the steps for asking for help and/or taking over maintainership, to allow contributors to act if interested in the code repository. For Wikimedia these docs are located at Requesting repository ownership.
No culture to improve changesets by other contributors

Changesets are rarely picked up by other developers[WB]. After merging, “it is very difficult to revert it or to get original developers to help fix some broken aspect of a merged change”[WB] regarding followup fixing culture.

  • Document best practices to amend a change written by another contributor if you are interested in bringing the patch forward.
Hard to find related patches

Hard to find existing “related” patches in a certain code area when working on your own patch in that area, or when reviewing several patches in the same code area. (Hence there might also be some potential rebase/merge conflicts[WB] to avoid if possible.)

  • Phabricator Differential offers “Recent Similar Open Revisions”.[WC] Gerrit might have such a feature in a newer version.[WD]
Lack of synchronization between teams

Lack of synchronization between developer teams: team A stuck because team B doesn’t review their patches?

  • Organization specific: Wikimedia has regular “Scrum of Scrum” meetings of all scrum masters across teams, to communicate when the work of a team is blocked by another team.

Please comment on important factors you have experienced that are missing!

References

Reproducible builds folks: Reproducible Builds: week 82 in Stretch cycle

Planet Debian - Mon, 21/11/2016 - 1:47pm

What happened in the Reproducible Builds effort between Sunday November 13 and Saturday November 19 2016:

Media coverage

Elsewhere in Debian
  • dpkg 1.18.14 has migrated to stretch.

  • Chris Lamb filed #844431 ("packages should build reproducibly") against debian-policy.

  • Ximin worked on glibc reproducibility this week, catching some bugs in disorderfs, FUSE, as well as glibc itself.

Documentation update

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

43 package reviews have been added, 4 have been updated and 12 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been updated:

4 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (26)
  • Daniel Stender (1)
  • Filip Pytloun (1)
  • Lucas Nussbaum (28)
  • Michael Biebl (1)
strip-nondeterminism development

disorderfs development
  • #844498 ("disorderfs: using it for building kills the host")
debrebuild development

debrebuild is a new tool proposed by HW42 and josch (see #774415: "From srebuild sbuild-wrapper to debrebuild").

debrepatch development

debrepatch is a set of scripts that we're currently developing to make it easier to track unapplied patches. We have a lot of those and we're not always sure if they still work. The plan is to set up jobs to automatically apply old reproducibility patches to newer versions of packages and notify the right people if they don't apply and/or no longer make the package reproducible.

debpatch is a component of debrepatch that applies debdiffs to Debian source packages. In other words, it is to debdiff(1) what patch(1) is to diff(1). It is a general tool that is not specific to Reproducible Builds. This week, Ximin Luo worked on making it more "production-ready" and will soon submit it for inclusion in devscripts.

reprotest development

Ximin Luo significantly improved reprotest, adding presets and auto-detection of which preset to use. One can now run e.g. reprotest auto . or reprotest auto $pkg_$ver.dsc instead of the long command lines that were needed before.

He also made it easier to set up build dependencies inside the virtual server and made it possible to specify pre-build dependencies that reprotest itself needs to set up the variations. Previously one had to manually edit the virtual server to do that, which was not very usable to humans without an in-depth knowledge of the building process.

These changes will be tested some more and then released in the near future as reprotest 0.4.

tests.reproducible-builds.org
  • Debian:

    • An index of our usertagged bugs page was added by Holger after a Q+A session in Cambridge.
    • Holger also set up two new i386 builders, build12+16, for >50% increased build performance. For this, we went from 18+17 cores on two 48GB machines to 10+10+9+9 cores on four 36GB RAM machines… and from 16 to 24 builder jobs. Thanks to Profitbricks for providing us with all these resources once more!
    • h01ger also tried to enable disorderfs again, but hit #844498, which brought down the i386 builders, so he disabled it again. Next will be trying disorderfs on armhf or amd64, to see whether this bug also manifests there.
Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC.

Walmart Tests Blockchain For Use In Food Recalls

Slashdot.org - Mon, 21/11/2016 - 1:34pm
An anonymous reader quotes a Bloomberg article about Walmart: Like most merchants, the world's largest retailer struggles to identify and remove food that's been recalled. When a customer becomes ill, it can take days to identify the product, shipment and vendor. With the blockchain, Wal-Mart will be able to obtain crucial data from a single receipt, including suppliers, details on how and where food was grown and who inspected it... "If there's an issue with an outbreak of E. coli, this gives them an ability to immediately find where it came from. That's the difference between days and minutes," says Marshal Cohen, an analyst at researcher NPD Group Inc. In October, Wal-Mart started tracking two products using blockchain: a packaged produce item in the U.S., and pork in China. While only two items were included, the test involved thousands of packages shipped to multiple stores... If Wal-Mart adopts the blockchain to track food worldwide, it could become one of the largest deployments of the technology to date. America's Centers for Disease Control and Prevention estimates that foodborne illnesses affect roughly 48 million people annually, according to the article, "with 128,000 hospitalized and 3,000 dying."

Richard Hughes: Last batch of ColorHugALS

Planet GNOME - Mon, 21/11/2016 - 12:51pm

I’ve got 9 more ColorHugALS devices in stock, and when they are sold there will be no more for sale. With supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware, both the hardware design and the firmware itself, so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.

In part, the original goal has been achieved: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

Is encrypted e-mail a must in the Trump presidential era?

LinuxSecurity.com - Mon, 21/11/2016 - 12:04pm
LinuxSecurity.com: With Donald Trump poised to take over the U.S. presidency, does it make sense for all of us to move to encrypted e-mail if we want to preserve our privacy? Encrypted e-mail provider ProtonMail says yes, indeed.

Your car will be recalled in 2017 thanks to poor open-source security

LinuxSecurity.com - Mon, 21/11/2016 - 12:03pm
LinuxSecurity.com: In the coming year, a high-profile auto manufacturer will be forced to recall vehicles due to a cybersecurity breach for the first time, experts have warned.

Eric Hammond: Watching AWS CloudFormation Stack Status

Planet UBUNTU - Mon, 21/11/2016 - 10:00am

live display of current event status for each stack resource

Would you like to be able to watch the progress of your new CloudFormation stack resources like this?

That’s what the output of the new aws-cloudformation-stack-status command looks like when I launch a new AWS Git-backed Static Website CloudFormation stack.

It shows me in real time which resources have completed, which are still in progress, and which, if any, have experienced problems.

Background

AWS provides a few ways to look at the status of resources in a CloudFormation stack including the stream of stack events in the Web console and in the aws-cli.

Unfortunately, these displays show multiple events for each resource (e.g., CREATE_IN_PROGRESS, CREATE_COMPLETE) and it’s difficult to match up all of the resource events by hand to figure out which resources are incomplete and still in progress.

Solution

I created a bit of wrapper code that goes around the aws cloudformation describe-stack-events command. It performs these operations:

  1. Cuts the output down to the few fields that matter: status, resource name, type, event time.

  2. Removes all but the most recent status event for each stack resource.

  3. Sorts the output to put the resources with the most recent status changes at the top.

  4. Repeatedly runs this command so that you can see the stack progress live and know exactly which resource is taking the longest.
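Steps 2 and 3 above reduce to a few lines. This Python sketch (operating on already-fetched event dicts with assumed keys, not the actual shell wrapper around the aws-cli) keeps only the newest event per resource and sorts newest-first:

```python
def latest_events(events):
    """Reduce a CloudFormation event stream to the most recent event
    per resource, sorted with the newest status change first.
    events: list of dicts with keys "time", "status", "resource", "type"."""
    latest = {}
    for e in events:
        r = e["resource"]
        # Keep only the newest event seen for each resource.
        if r not in latest or e["time"] > latest[r]["time"]:
            latest[r] = e
    return sorted(latest.values(), key=lambda e: e["time"], reverse=True)
```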

I tossed the simple script up here in case you’d like to try it out:

GitHub: aws-cloudformation-stack-status

You can run it to monitor your CloudFormation stack with this command:

aws-cloudformation-stack-status --watch --region $region --stack-name $stack

Interrupt with Ctrl-C to exit.

Note: You will probably need to start your terminal out wider than 80 columns for a clean presentation.

Note: This does use the aws-cli, so installing and configuring that is a prerequisite.

Stack Delete Example

Here’s another example terminal session watching a stack-delete operation, including some skipped deletions (because of a retention policy). It finally ends with a “stack not found error” which is exactly what we hope for after a stack has been deleted successfully. Again, the resources with the most recent state change events are at the top.

Note: These sample terminal replays cut out almost 40 minutes of waiting for the creation and deletion of the CloudFront distributions. You can see the real timestamps in the rightmost columns.

Original article and comments: https://alestic.com/2016/11/aws-cloudformation-stack-status/

EFF Report Finds 74% Of Censorship News Stories Are About Facebook

Slashdot.org - Mon, 21/11/2016 - 9:34am
An anonymous reader writes: OnlineCensorship.org just released a new report "to provide an objective, data-driven voice in the conversation around commercial content moderation." They're collecting media reports about censorship on Facebook, Twitter, Instagram, YouTube, Flickr and Google+, and have now analyzed 294 reports of content takedowns -- 74% of which pertained to Facebook. (Followed by Instagram with 16% and Twitter with 7%.) 47% of all the takedowns were nudity-related, while the next two most frequent reasons given were "real name" violations and "inappropriate content". Noting "a more visible public debate" over content moderation, the report acknowledges that 4.7 billion Facebook posts are made every day. (It also reports the "consistent refrain" from services apologizing for issues -- that "our team processes millions of reports each week...") But the most bizarre incident they've identified was the tech blogger in India who was locked out of his Facebook account in October because he shared a photo of a cat in a business suit. "It might sound stupid but this just happened to me," he told Mashable India, which reports Facebook later apologized and said it had made a mistake. Their report -- part of the EFF's collaboration with Visualizing Impact -- urges platforms to clarify their guidelines (as well as applicable laws), to explain the mechanisms being used to evaluate content and appeals, and to share those criteria when notifying users of take-downs. For example, in August Facebook inexplicably removed a 16th-century sketch by Erasmus of Rotterdam detailing a right hand.

Arturo Borrero González: Great Debian meeting in Seville

Planet Debian - Mon, 21/11/2016 - 6:00am

Last week we had an interesting Debian meeting in Seville, Spain. This has been the third time (in recent years) the local community meets around Debian.

We met at about 20:00 at Rompemoldes, a crafts creation space. There we had a very nice dinner while talking about Debian and FLOSS. The dinner was sponsored by the Plan4D association.

The event was joined by almost 20 people with different relations to Debian:

  • Debian users
  • DDs
  • Debian contributors
  • General FLOSS interested people

I would like to thank all the attendants and Pablo Neira from Plan4D for the organization.

I had to leave the event after 3.5 hours of great talking and networking, but the rest of the people stayed there. The atmosphere was really good :-)

Looking forward to another meeting soon!

Header picture by Ana Rey.

Android User Locked Out Of Google Accounts After Moving To A New City

Slashdot.org - Hën, 21/11/2016 - 5:30pd
Slashdot reader troublemaker_23 shares a post from ITWire: An Android user has been locked out of his Google account apparently because he moved... The explanation offered by Google support staff was that since his address details differed, billing information with Google wasn't current and hence the user's purchases could look fraudulent... During his interactions with Google support to find out why he had been locked out, he was told that "It is our policy to not discuss the specific reasons for an account closure"... He was initially directed by Google staff to a site where he had to scan his driver's license and credit card and told that he would have to wait 24 hours to get his account unlocked. But after this time passed, he was told that the account would not be unlocked and Google would not tell him why. He was advised to abandon his old account and start a fresh one. However, this meant he could not use the credit card that he had used on the old account... The affected user called this "a warning to others not to put all your eggs in one basket, because these days, you have no rights over that basket whatsoever." But Friday the user posted an update on Reddit, quoting a Google staffer as saying "we routinely monitor account behavior on Google Play and take action on potentially suspicious activity. Unfortunately, in your case, your account was wrongly flagged and suspended. I have just reopened your account... I sincerely apologize for the stress and inconvenience this has caused you."

Read more of this story at Slashdot.

Steve Kemp: Detecting fraudulent signups?

Planet Debian - Hën, 21/11/2016 - 4:45pd

I run a couple of different sites that allow users to sign-up and use various services. In each of these sites I have some minimal rules in place to detect bad signups, but these are a little ad hoc, because the nature of "badness" varies on a per-site basis.

I've worked in a couple of places where there are in-house tests of bad signups, and these usually boil down to some naive, and overly-broad, rules:

  • Does the phone number's (international) prefix match the country of the user?
  • Does the postal address supplied even exist?

Some places penalise users based upon location too:

  • Does the IP address the user submitted from come from TOR?
  • Does the geo-IP country match the user's stated location?
  • Is the email address provided by a "free" provider?

At the moment I've got a simple HTTP-server which receives a JSON post of a new user's details, and returns "200 OK" or "403 Forbidden" based on some very very simple criteria. This is modeled on the spam-detection server I use for blog comments - something that is itself becoming less useful over time. (Perhaps time to kill that? A decision for another day.)
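The shape of such a service is simple enough to sketch. This is a minimal illustration, not Steve's actual code: the field names, the tiny prefix table, and the free-mail list are all made up for the example, and real rules would of course be per-site.

```python
# Minimal sketch of a signup-checking micro-service: POST JSON user details,
# get back 200 OK or 403 Forbidden. All rules and field names are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FREE_MAIL = {"gmail.com", "yahoo.com", "hotmail.com"}
PHONE_PREFIX = {"FI": "+358", "GB": "+44", "US": "+1"}  # tiny sample table

def is_bad_signup(user):
    """Return True if any naive rule flags the signup as suspicious."""
    domain = user.get("email", "").rpartition("@")[2].lower()
    if domain in FREE_MAIL:
        return True
    prefix = PHONE_PREFIX.get(user.get("country", ""))
    if prefix and not user.get("phone", "").startswith(prefix):
        return True
    return False

class SignupCheck(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            user = json.loads(self.rfile.read(length))
        except ValueError:
            self.send_response(400)  # not even valid JSON
            self.end_headers()
            return
        self.send_response(403 if is_bad_signup(user) else 200)
        self.end_headers()

# To run the service:
#   HTTPServer(("127.0.0.1", 8080), SignupCheck).serve_forever()
```

Each site then just POSTs new signups to the shared endpoint instead of duplicating the rules locally.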

Unfortunately this whole approach is very reactive, as it takes human eyeballs to detect new classes of problems. Code can't guess in advance that it should block usernames which could collide with official ones - for example "admin", "help", or "support".

I'm certain that these systems have been written a thousand times, as I've seen at least five such systems, and they're all very similar. The biggest flaw in all these systems is that they try to classify users in advance of them doing anything. We're trying to say "Block users who will use stolen credit cards", or "Block users who'll submit spam", by correlating that behaviour with other things. In an ideal world you'd judge users only by the actions they take, not how they signed up. And yet .. it is better than nothing.

For the moment I'm continuing to try to make the best of things; at least by centralising the rules I cut down on duplicate code. I'll pretend I'm being cool, modern, and sexy, and call this a micro-service! (Ignore the lack of containers for the moment!)

Stephen Michael Kellat: Ubuntu Community Appreciation Day 2016

Planet UBUNTU - Hën, 21/11/2016 - 4:28pd

A screenshot of Ubuntu Planet showing a blog post by Svetlana Belkin

I had almost forgotten about Ubuntu Community Appreciation Day 2016. There is much for me to appreciate this year. I have had to absent myself from many community activities and functions for almost the entire year. There have been random blog posts and I have popped up on mailing lists at the weirdest of times but I have mostly been gone.

Being under audit and investigation for almost the entirety of 2016 can do that to you. Working in a government job causes such things to happen, too. Thankfully I’m not moving onward and upward to higher office but I’m now thoroughly vetted for all sorts of lateral moves.

The Xubuntu Sticker from SpreadUbuntu.org found at http://spreadubuntu.org/en/material/sticker/xubuntu-sticker made by lyz

I thoroughly appreciate and miss the Xubuntu team. A great distro continues to be made. I wish I was still there to contribute. Life right now says I have other missions to undertake especially as social fabric in the United States of America seems to get all bendy and twisty.

Tomorrow is another day.

2016 Winners Announced For Interactive Fiction Competition

Slashdot.org - Hën, 21/11/2016 - 3:30pd
An anonymous reader writes: This week IFComp 2016 announced the winners in their 22nd annual interactive fiction competition. After a seven-week play period, the entry with the highest average rating was "the noir standout 'Detectiveland' by Robin Johnson," according to contest organizers (while the game earning the lowest score was "Toiletworld.") A special prize is also awarded each year -- the Golden Banana of Discord -- for the game which provoked the most wildly different ratings. This year that award went to "A Time of Tungsten" by Devin Raposo. ("The walls are high, the hole is deep. She is trapped, on a distant planet. Watched. She may not survive...") The games will soon be released on the official IF Archive site, but in the meantime you can download a 222-megabyte archive of all 58 games.

Read more of this story at Slashdot.

Ask Slashdot: Could A 'Smart Firewall' Protect IoT Devices?

Slashdot.org - Hën, 21/11/2016 - 1:30pd
To protect our home networks from IoT cracking, Ceaus wants to see a smart firewall: It's a small box (the size of a Raspberry Pi) with two ethernet ports you put in front of your ISP router. This firewall is capable of detecting your IoT devices and blocking their access to the internet, only and exclusively allowing traffic for the associated mobile app (if there is one). All other outgoing IoT traffic is blocked... Once you've plugged in your new IoT toaster, you press the "Scan" button on the firewall and it does the rest for you. This would also block "snooping" from outside your home network, and of course, keep your devices off botnets. The original submission asks "Does such a firewall exist? Is this a possible Kickstarter project?" So leave your best answers in the comments. Could a smart firewall protect IoT devices?
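One plausible way to build the box Ceaus describes is a small Linux device that discovers each IoT gadget during the "Scan" step and then generates per-device firewall rules. The sketch below only generates nftables commands; the chain name, device IP, and vendor endpoint are all placeholders invented for the example, not any real product's behaviour.

```python
# Sketch of the rule-generation side of a hypothetical "smart firewall":
# pin one IoT device to its vendor's app endpoint and drop everything else.
def iot_rules(device_ip, vendor_endpoint, port=443):
    """Return nftables commands restricting a single IoT device."""
    return [
        # allow only traffic from the device to its vendor's endpoint
        f"nft add rule inet iotwall forward ip saddr {device_ip} "
        f"ip daddr {vendor_endpoint} tcp dport {port} accept",
        # drop all other outgoing traffic from the device (no botnet duty)
        f"nft add rule inet iotwall forward ip saddr {device_ip} drop",
        # drop unsolicited incoming connections ("snooping") towards it
        f"nft add rule inet iotwall forward ip daddr {device_ip} ct state new drop",
    ]

for cmd in iot_rules("192.168.1.50", "203.0.113.10"):
    print(cmd)
```

The hard part, of course, is not the rules but reliably learning which endpoints each device legitimately needs.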

Read more of this story at Slashdot.

Kubuntu: Welcome new Kubuntu Members

Planet UBUNTU - Dje, 20/11/2016 - 11:57md

Friday November 18 was a productive day for the Kubuntu Community, as three new people were questioned and then elected into Membership. Welcome Simon Quigley, José Manuel Santamaría, and Walter Lapchynski as they package, work on our tooling, promote Kubuntu and help users.

Read more about Kubuntu Membership here: https://community.kde.org/Kubuntu/Membership

Steinar H. Gunderson: Nageru documentation

Planet Debian - Dje, 20/11/2016 - 10:45md

Even though the World Chess Championship takes up a lot of time these days, I've still found some time for Nageru, my live video mixer. But this time it doesn't come in form of code; rather, I've spent my time writing documentation.

I spent some time fretting over what technical solution I wanted. I explicitly wanted end-user documentation, not developer documentation—I rarely find HTML-rendered versions of every member function in a class the best way to understand a program anyway. Actually, on the contrary: Having all sorts of syntax interwoven in class comments tends to be more distracting than anything else.

Eventually I settled on Sphinx, not because I found it fantastic (in particular, ReST is a pain with its bizarre variable punctuation-based syntax), but because I'm convinced it has all the momentum right now. Just like git did back in the day, the fact that the Linux kernel has chosen it means it will inevitably grow a quite large ecosystem, and I won't be ending up having to maintain it anytime soon.
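For anyone curious how little Sphinx demands to get started, a conf.py can be close to empty. This is a generic sketch, not Nageru's actual configuration; the theme and metadata are illustrative.

```python
# Minimal Sphinx conf.py sketch for end-user (not API) documentation.
# Values here are illustrative, not taken from the Nageru repository.
project = "Nageru"
author = "Steinar H. Gunderson"
extensions = []               # plain ReST, no autodoc -- this is not API docs
master_doc = "index"          # the document holding the root toctree
html_theme = "alabaster"      # Sphinx's default theme
exclude_patterns = ["_build"]
```

With that in place, `sphinx-build -b html . _build/html` renders the pages.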

I tried finding a balance between spending time on installation/setup (only really useful for first-time users, and even then, only a subset of them), concept documentation (how to deal with live video in general, and how Nageru fits into a larger ecosystem of software and equipment) and more concrete documentation of all the various features and quirks of Nageru itself. Hopefully, most people will find at least something that's not already obvious to them, without drowning in detail.

You can read the documentation at https://nageru.sesse.net/doc/, or if you want to send patches, the right place to patch is the git repository.

Svetlana Belkin: Ubuntu Community Appreciation Day 2016

Planet UBUNTU - Dje, 20/11/2016 - 7:06md

It’s that time of the year where we appreciate the members of our Ubuntu Community, Member or not.

This year I appreciate a group of people and three others.  The group is the one that went to Ohio Linux Fest this year.  I thank you for all of the fun!

The first person I appreciate is Benjamin Kerensa, for his tweet about me (which explains why I chose him):

I’m always inspired by @senseopenness she leads and keeps people inspired

— Benjamin Kerensa (@bkerensa) June 9, 2016

The second person is Simon Quigley, who is quite an awesome kid.  Over the last year, he has really changed his attitude and even his behavior: he no longer sounds like a 14-year-old but someone older.  Because he is a youngster, he has a good chance of getting a job in open source, in development or elsewhere.

Last but not least, the last person is Pavel Sayekat. Like Simon, he has also improved, and he is now helping to get his LoCo, Ubuntu Bangladesh, active again.

Keep it up everyone for making the Community the way it is!
