
Feed aggregator

4.18.13: stable

Linux Kernel - Wed, 10/10/2018 - 8:56am
Version: 4.18.13 (stable)  Released: 2018-10-10  Source: linux-4.18.13.tar.xz  PGP Signature: linux-4.18.13.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-4.18.13

4.14.75: longterm

Linux Kernel - Wed, 10/10/2018 - 8:54am
Version: 4.14.75 (longterm)  Released: 2018-10-10  Source: linux-4.14.75.tar.xz  PGP Signature: linux-4.14.75.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-4.14.75

4.9.132: longterm

Linux Kernel - Wed, 10/10/2018 - 8:53am
Version: 4.9.132 (longterm)  Released: 2018-10-10  Source: linux-4.9.132.tar.xz  PGP Signature: linux-4.9.132.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-4.9.132

4.4.160: longterm

Linux Kernel - Wed, 10/10/2018 - 8:52am
Version: 4.4.160 (longterm)  Released: 2018-10-10  Source: linux-4.4.160.tar.xz  PGP Signature: linux-4.4.160.tar.sign  Patch: full (incremental)  ChangeLog: ChangeLog-4.4.160

next-20181010: linux-next

Linux Kernel - Wed, 10/10/2018 - 7:43am
Version: next-20181010 (linux-next)  Released: 2018-10-10

4.19-rc7: mainline

Linux Kernel - Sun, 07/10/2018 - 5:35pm
Version: 4.19-rc7 (mainline)  Released: 2018-10-07  Source: linux-4.19-rc7.tar.gz  Patch: full (incremental)

Hans de Goede: Announcing flickerfree boot for Fedora 29

Planet GNOME - Mon, 01/10/2018 - 2:11pm
A big project I've been working on recently for Fedora Workstation is what we call flickerfree boot. The idea here is that the firmware lights up the display in its native mode and no further modesets are done after that. Likewise there are also no unnecessary jarring graphical transitions.

Basically the machine boots up in UEFI mode, shows its vendor logo and then the screen keeps showing the vendor logo all the way to a smooth fade into the gdm screen. Here is a video of my main workstation booting this way.

Part of this effort is the hidden grub menu change for Fedora 29. I'm happy to announce that most of the other flickerfree changes have also landed for Fedora 29:

  1. There have been changes to shim and grub to not mess with the EFI framebuffer, leaving the vendor logo intact, when they don't have anything to display (so when grub is hidden).

  2. There have been changes to the kernel to properly inherit the EFI framebuffer when using Intel integrated graphics, and to delay switching the display to the framebuffer-console until the first kernel message is printed. Together with changes to make "quiet" really quiet (except for oopses/panics) this means that the kernel now also leaves the EFI framebuffer with the logo intact if quiet is used.

  3. There have been changes to plymouth to allow pressing ESC as soon as plymouth loads to get detailed boot messages.

With all these changes in place it is possible to get a fully flickerfree boot today, as the video of my workstation shows. This video is made with a stock Fedora 29 with 2 small kernel commandline tweaks:

  1. Add "i915.fastboot=1" to the kernel commandline; this removes the first and last modeset during the boot when using the i915 driver.

  2. Add "plymouth.splash-delay=20" to the kernel commandline. Normally plymouth waits 5 seconds before showing the charging Fedora logo, so that on systems which boot in less than 5 seconds the system simply transitions immediately to gdm. On systems which take slightly longer to boot, the charging Fedora logo shows up, which IMHO makes the boot less fluid. This option increases the time plymouth waits before showing the splash to 20 seconds.

So if you have a machine with Intel integrated graphics that boots in UEFI mode, you can give flickerfree boot support a spin on Fedora 29 by just adding these 2 commandline options. Note that this requires the new GRUB hidden menu feature to be enabled; see the FAQ on this.
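One way to persist the two tweaks above is grubby, Fedora's tool for editing kernel arguments for installed kernels. The following is a dry-run sketch rather than a definitive recipe: the leading echo just prints the invocation so it can be reviewed; drop it (and run as root) to actually apply the change.

```shell
# Dry-run sketch: print the grubby invocation that would persist
# the two flickerfree tweaks for all installed kernels.
ARGS="i915.fastboot=1 plymouth.splash-delay=20"
echo grubby --update-kernel=ALL --args="$ARGS"
```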

The need for these 2 commandline options shows that the work on this is not yet entirely complete; here is my current TODO list for finishing this feature:

  1. Work with the upstream i915 driver devs to make i915.fastboot the default. If you try i915.fastboot=1 and it causes problems for you please let me know.

  2. Write a new plymouth theme based on the spinner theme which uses the vendor logo as background and draws the spinner beneath it. Since this keeps the logo and black background as is and just draws the spinner on top, this avoids the current visually jarring transition from logo screen to plymouth, allowing us to set plymouth.splash-delay to 0. This also has the advantage that the spinner will provide visual feedback that something is actually happening as soon as plymouth loads.

  3. Look into making this work with AMD and NVIDIA graphics.

Please give the new flickerfree support a spin and let me know if you have any issues with it.

Hans de Goede: GRUB hidden menu change FAQ

Planet GNOME - Mon, 01/10/2018 - 1:58pm
There have been questions about the new GRUB hidden menu change in various places; here is a FAQ which hopefully answers most of them:

1. What is the GRUB hidden menu change?

See the Detailed Description on the change page. The main motivation for adding this is to get to a fully flickerfree boot.

2. How do I enable the hidden GRUB menu?

On new Fedora 29 Workstation installs this will be enabled by default. If your system has been upgraded to F29 from an older release, you can enable it by running these commands:

On a system using UEFI booting ("ls /sys/firmware/efi/efivars" returns a bunch of files):

sudo grub2-editenv - set menu_auto_hide=1
sudo grub2-mkconfig -o /etc/grub2-efi.cfg


On a system using legacy BIOS boot:

sudo grub2-editenv - set menu_auto_hide=1
sudo grub2-mkconfig -o /etc/grub2.cfg


Note that grub2-mkconfig will overwrite any manual changes you've made to your grub.cfg (normally no manual changes are made to this file).

If your system has Windows on it, but you boot it only once a year so you would still like to hide the GRUB menu, you can tell GRUB to ignore the presence of Windows by running:

sudo grub2-editenv - set menu_auto_hide=2
3. How do I disable the hidden GRUB menu?

To permanently disable the auto-hide feature, run:

sudo grub2-editenv - unset menu_auto_hide

That is it.
4. How do I access the GRUB menu when it is hidden?

If for some reason you need to access the GRUB menu while it is hidden, there are multiple ways to get to it:

  1. If you can get to gdm, access the top-right menu (the system menu) and click on the power [⏻] icon. Then keep ALT pressed to change the "Restart" option into "Boot Options" and click "Boot Options".

  2. While booting, keep SHIFT pressed. Usually you need to first press SHIFT when the vendor logo is shown by the firmware / when the firmware says e.g. "Press F2 to enter setup"; if you press it earlier it may not be seen. Note this may not work on some machines.

  3. During boot press ESC or F8 while GRUB loads (simply press the key repeatedly directly after power on until you are in the menu).

  4. Force the previous boot to be considered failed:

    1. Press CTRL + ALT + DEL while booting so that the system reboots before hitting gdm

    2. Press CTRL + ALT + F6 to switch away from gdm, followed by CTRL + ALT + DEL.

    3. Press the power-button for 4 seconds to force the machine off.

    Any of these will cause the boot_success grub_env flag to not get set, and
    the menu will show on the next boot.

  5. Manually set the menu-show-once flag by running "grub-set-bootflag menu_show_once". This will cause the menu to show for 60 seconds before continuing with the default boot option.

5. When is a boot considered successful?

The boot_success grub_env flag gets set when you log in as a normal user and your session lasts at least 2 minutes, or when you shut down or restart the system from the GNOME system (top-right) menu.

So if you e.g. log in, do something, and then within 30 seconds type reboot in a terminal (instead of rebooting from the menu), this will not count as a successful boot and the menu will show on the next boot.

Free software activities in September 2018

Planet Debian - Sun, 30/09/2018 - 9:02pm

Here is my monthly update covering what I have been doing in the free software world during September 2018 (previous month):

More hacking on the Lintian static analysis tool for Debian packages:

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

Debian
  • As a member of the Debian Python Module Team I pushed a large number of changes across 100s of repositories including removing empty debian/patches/series & debian/source/options files, correcting email addresses, dropping generated .debhelper dirs, removing trailing whitespaces, respecting the nocheck build profile via DEB_BUILD_OPTIONS and correcting spelling mistakes in debian/control files.

  • Added a missing dependency on golang-golang-x-tools for digraph(1) in dh-make-golang as part of the Debian Go Packaging Team.
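The empty-file cleanup described in the first bullet above can be sketched as a pair of find invocations. This is a hypothetical reconstruction (the real changes were pushed per repository), demonstrated here on a throwaway tree:

```shell
# Hypothetical sketch of the empty-file cleanup, shown on a demo tree.
mkdir -p demo/pkg/debian/patches demo/pkg/debian/source
touch demo/pkg/debian/patches/series                                  # empty: should be removed
printf 'extend-diff-ignore = "x"\n' > demo/pkg/debian/source/options  # non-empty: kept
find demo -path '*/debian/patches/series' -empty -delete
find demo -path '*/debian/source/options' -empty -delete
ls demo/pkg/debian/patches    # the empty series file is gone
ls demo/pkg/debian/source     # the non-empty options file remains
rm -r demo
```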


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.

  • Issued DLA 1492-1 fixing a string injection vulnerability in the dojo Javascript library.

  • Issued DLA 1496-1 to correct an integer overflow vulnerability in the "Little CMS 2" colour management library. A specially-crafted input file could have led to a heap-based buffer overflow.

  • Issued DLA 1498-1 for the curl utility to fix an integer overflow vulnerability (background).

  • Issued DLA 1501-1 to fix an out-of-bounds read vulnerability in libextractor, a tool to extract meta-data from files of arbitrary type.

  • Issued DLA 1503-1 to prevent a potential denial of service and a potential arbitrary code execution vulnerability in the kamailio SIP (Session Initiation Protocol) server. A specially-crafted SIP message with an invalid Via header could cause a segmentation fault and crash the server due to missing input validation.

  • Issued ELA 34-1 for the Redis key-value database where the redis-cli tool could have allowed an attacker to achieve code execution and/or escalate to higher privileges via a specially-crafted command line.


Uploads

I also uploaded the following packages as a member of the Debian Python Module Team: django-ipware (2.1.0-1), django-adminaudit (0.3.3-2), python-openid (2.2.5-7), python-social-auth (1:0.2.21+dfsg-3), python-vagrant (0.5.15-2) & python-validictory (0.8.3-3).

Finally, I sponsored the following uploads: bm-el (201808-1), elpy (1.24.0-1), mutt-alias-el (1.5-1) & android-platform-external-boringssl (8.1.0+r23-2).


Debian bugs filed


FTP Team

As a Debian FTP assistant I ACCEPTed 81 packages: adios, android-platform-system-core, aom, appmenu-registrar, astroid2, black, bm-el, colmap, cowpatty, devpi-common, equinox-bundles, fabulous, fasttracker2, folding-mode-el, fontpens, ganeti-2.15, geomet, golang-github-google-go-github, golang-github-gregjones-httpcache, hub, infnoise, intel-processor-trace, its-playback-time, jsonb-api, kitinerary, kpkpass, libclass-tiny-chained-perl, libmoox-traits-perl, librda, libtwitter-api-perl, liburl-encode-perl, libwww-oauth-perl, llvm-toolchain-7, lucy, markdown-toc-el, mmdebstrap, mozjs60, mutt-alias-el, nvidia-graphics-drivers-legacy-390xx, o-saft, pass-tomb, pass-tomb-basic, pgformatter, picocli, pikepdf, pipewire, poliastro, port-for, pyagentx, pylint2, pynwb, pytest-flask, python-argon2, python-asteval, python-caldav, python-djangosaml2, python-pcl, python-persist-queue, python-rfc3161ng, python-treetime, python-x2go, python-x3dh, python-xeddsa, rust-crossbeam-deque, rust-iovec, rust-phf-generator, rust-simd, rust-spin, rustc, sentinelsat, sesman, sphinx-autobuild, sphinxcontrib-restbuilder, tao-pegtl, trojan, ufolib2, ufonormalizer, unarr, vlc-plugin-bittorrent, xlunzip & xxhash.

I additionally filed 6 RC bugs against packages that had potentially-incomplete debian/copyright files against adios, pgformatter, picocli, python-argon2, python-pcl & python-treetime.

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

nanotime 0.2.3

Planet Debian - Sun, 30/09/2018 - 5:14pm

A minor maintenance release of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release disables some tests on the Slowlaris platform we are asked to conform to (which is a good thing, as a wider variety of test platforms widens test coverage) yet have no real access to (which is a bad thing, obviously) beyond what the helpful rhub service offers. We also updated the Travis setup. No code changes.

Changes in version 0.2.3 (2018-09-30)
  • Skip some tests on Solaris which seems borked with timezones. As we have no real access, no fix is possible (Dirk in #42).

  • Update Travis setup

Once this updates on the next hourly cron iteration, we also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

All I wanted to do is check an error code

Planet Debian - Sun, 30/09/2018 - 2:03pm
I was feeling a little under the weather last week and did not have enough concentration to work on developing a new NetSurf feature as I had planned. Instead I decided to look at a random bug from our worryingly large collection.

This led me to consider the HTML form submission function, at which point it was "can open, worms everywhere". The code in question has a fairly simple job to explain:
  1. A user submits a form (by clicking a button or such) and the Document Object Model (DOM) is used to create a list of information in the web form.
  2. The list is then converted to the appropriate format for sending to the web site server.
  3. An HTTP request is made using the correctly formatted information to the web server.
However the code I was faced with, while generally functional, was impenetrable, having accreted over a long time.

At this point I was forced into a diversion to fix up the core URL library's handling of query strings (this is used when the form data is submitted as part of the requested URL), which was necessary to simplify some complicated string handling and make the implementation more compliant with the specification.

My next step was to add some basic error reporting instead of warning the user the system was out of memory for every failure case, which was making debugging somewhat challenging. I was beginning to think I had discovered a series of very hairy yaks, although at least I was not trying to change a light bulb, which can get very complicated.

At this point I ran into the form_successful_controls_dom() function, which performs step one of the process. This function had six hundred lines of code, hundreds of conditional branches, 26 local variables and five levels of indentation in places. These properties combined resulted in a cyclomatic complexity metric (CCM) of 252. For reference, programmers generally try to keep a single function to no more than a hundred lines of code with as few local variables as possible, resulting in a CCM of 20.

I now had a choice:

  • I could abandon investigating the bug, because even if I could find the issue, changing such a function without adequate testing is likely to introduce several more.
  • I could refactor the function into multiple simpler pieces.

I slept on this decision and decided to at least try to refactor the code in an attempt to pay back a little of the technical debt in the browser (and maybe let me fix the bug). After several hours of work the refactored source has the desirable properties of:

  • multiple straightforward functions
  • no function much more than a hundred lines long
  • resource lifetime is now obvious and explicit
  • errors are correctly handled and reported

I carefully examined the change in generated code and was pleased to see the compiler output had become more compact. This is an important point that less experienced programmers sometimes miss: if your source code is written such that a compiler can reason about it easily, you often get much better results than the compact alternative. However, even if the resulting code had been larger the improved source would have been worth it.

After spending over ten hours working on this bug I have not resolved it yet; indeed one might suggest I have not even directly considered it yet! I wanted to use this to explain to users who have to wait a long time for their issues to get resolved (in any project, not just NetSurf) just how much effort is sometimes involved in a simple bug.

Vincent Sanders noreply@blogger.com Vincents Random Waffle

Philip Chimento: The Parable of the Code Review

Planet GNOME - Sun, 30/09/2018 - 7:22am

Last week’s events, with Linus Torvalds pledging to stop behaving like an asshole, instituting a code of conduct in Linux kernel development, and all but running off to join a monastery, have made a lot of waves. The last bastion of meritocracy has fallen! Linus, the man with five middle fingers on each hand, was going to save free software from ruin by tellin’ it like it is to all those writers of bad patches. Now he has gone over to the Dark Side, etc., etc.

There is one thing that struck me when reading the arguments last week, that I never realized before (as I guess I tend to avoid reading this type of material): the folks who argue against are convinced that the inevitable end result of respectful behaviour is a weakening of technical skill in free software. I’ve read from many sources last week the “meritocracy or bust” argument that meritocracy means three things: the acceptance of patches on no other grounds than technical excellence, the promotion of no other than technically excellent people to maintainer positions within projects, and finally the freedom to disrespect people who are not technically excellent. As I understand these people’s arguments, the meritocracy system works, so removing any of these three pillars is therefore bound to produce worse results than meritocracy. Some go so far as to say that treating people respectfully would mean taking technically excellent maintainers and replacing them with less proficient people chosen for how nice1 they are.

I never considered the motivations that way; maybe I didn’t give much thought to why on earth someone would argue in favour of behaving like an asshole. But it reminded me of a culture shift that happened a number of years ago, and that’s what this post is about.

Back in the bad old days…

It used to be that we didn’t have any code review in the free software world.

Well, of course we have always had code review; you would post patches to something like Bugzilla or a mailing list and the maintainer would review them and commit them, ask for a revision, or reject them (or, if the maintainer was Linus Torvalds, reject them and tell you to kill yourself.)

But maintainers just wrote patches and committed them, and didn’t have to review them! They were maintainers because we trusted them absolutely to write bug-free code, right?2 Sure, it may be that maintainers committed patches with mistakes sometimes, but those could hardly have been avoided. If you made avoidable mistakes in your patches, you didn’t get to be a maintainer, or if you did somehow get to be a maintainer then you were a bad one and you would probably run your project into the ground.

Somewhere along the line we got this idea that every patch should be reviewed, even if it was written by a maintainer. The reason is not because we want to enable maintainers who make mistakes all the time! Rather, because we recognize that even the most excellent maintainers do make mistakes, it’s just part of being human. And even if your patch doesn’t have a mistake, another pair of eyes can sometimes help you take it to the next level of elegance.

Some people complained: it’s bureaucratic! it’s for Agile weenies! really excellent developers will not tolerate it and will leave! etc. Some even still believe this. But even our tools have evolved over time to expect code review — you could argue that the foundational premise of the GitHub UI is code review! — and the perspective has shifted in our community so that code review is now a best practice, and what do you know, our code has gotten better, not worse. Maintainers who can’t handle having their code reviewed by others are rare these days.

By the way, it may not seem like such a big deal now that it’s been around for a while, but code review can be really threatening if you aren’t used to it. It’s not easy to watch your work be critiqued, and it brings out a fight-or-flight response in the best of us, until it becomes part of our routine. Even Albert Einstein famously wrote scornfully to a journal editor after a reviewer had pointed out a mistake in his paper, that he had sent the manuscript for publication, not for review.

And now imagine a future where we could say…

It used to be that we treated each other like crap in the free software world.

Well, of course we didn’t always treat each other like crap; you would submit patches and sometimes they would be gratefully accepted, but other times Linus Torvalds would tell you to kill yourself.

But maintainers did it all in the name of technical excellence! They were maintainers because we trusted them absolutely to be objective, right? Sure, it may be that patches by people who didn’t fit the “programmer” stereotype were flamed more often, and it may be that people got sick of the disrespect and left free software entirely, but the maintainers were purely objectively looking at technical excellence. If you weren’t purely objective, you didn’t get to be a maintainer, or if you somehow did get to be a maintainer then you were a bad one and you would probably run your project into the ground.

Somewhere along the line we got this idea that contributors should be treated with respect and not driven away from projects, even if the maintainer didn’t agree with their patches. The reason is not because we want to force maintainers to be less objective about technical excellence! Rather, because we recognize that even the most objective maintainers do suffer from biases, it’s just part of being human. And even if someone’s patch is objectively bad, treating them nonetheless with respect can help ensure they will stick around, contribute their perspectives which may be different from yours, and rise to a maintainer’s level of competence in the future.

Some people complained: it’s dishonest! it’s for politically correct weenies! really excellent developers will not tolerate it and will leave! etc. Some even still believe this. But the perspective has shifted in our community so that respect is now a best practice, and what do you know, our code (and our communities) have gotten better, not worse. Maintainers who can’t handle treating people respectfully are rare these days.

By the way, it may not seem like such a big deal now that it’s been around for a while, but confronting and acknowledging your own biases can be really threatening if you aren’t used to it… I think by now you get the idea.

Conclusion, and a note for the choir

I generally try not to preach to the choir anymore, and leave that instead to others. So if you are in the choir, you are not the audience for this post. I’m hoping, possibly vainly, that this actually might convince someone to think differently about meritocracy, and consider this a bug report.

But here’s a small note for us in the choir: I believe we are not doing ourselves any favours by framing respectful behaviour as the opposite of meritocracy, and I think that’s part of why the pro-disrespect camp have such a strong reaction against it. I understand why the jargon developed that way: those driven away by the current, flawed, implementation of meritocracy are understandably sick of hearing about how meritocracy works so well, and the term itself has become a bit poisoned.

If anything, we are simply trying to fix a bug in meritocracy3, so that we get an environment where we really do get the code written by the most technically excellent people, including those who in the current system get driven away by abusive language and behaviour.

[1] To be clear, I strive to be both nice and technically excellent, and the number of times I’ve been forced to make a tradeoff between those two things is literally zero. But that’s really the whole point of this essay.

[2] A remnant of these bad old days of absolute trust in maintainers, that still persists in GNOME to this day, is that committer privileges are for the whole GNOME project. I can literally commit anything I like, to any repository in gitlab.gnome.org/GNOME, even repositories that I have no idea what they do, or are written in a programming language that I don’t know!

[3] A point made eloquently by Matthew Garrett several years ago.

RcppAPT 0.0.5

Planet Debian - Sun, 30/09/2018 - 2:08am

A new version of RcppAPT – our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like – is now on CRAN.

This version is a bit of an experiment. I had asked on the r-package-devel and r-devel lists how I could suppress builds on macOS. As macOS does not have the required libapt-pkg-dev library to support apt, builds always failed. CRAN managed to not try on Solaris or Fedora, but somehow macOS would fail. Each. And. Every. Time. Sadly, nobody proposed a working solution.

So I got tired of this. Now we detect where we build, and if we can infer that it is not a Debian or Ubuntu (or derived) system and no libapt-pkg-dev is found, we no longer fail. Rather, we just set a #define and at compile time switch to essentially empty code. Et voilà: no more build errors.
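The detection step can be sketched as a small configure-style probe. The header path and the define name below are assumptions for illustration, not the package's actual configure script: if apt's development headers are present we emit a "good" define, otherwise a "stub" define that switches the C++ sources to the empty fallback code.

```shell
# Sketch of a configure-time probe (APT_PKG_FOUND is a hypothetical define name):
# present headers -> real implementation; absent -> stub code that compiles everywhere.
if [ -e /usr/include/apt-pkg/init.h ]; then
    echo 'PKG_CPPFLAGS = -DAPT_PKG_FOUND=1'
else
    echo 'PKG_CPPFLAGS = -DAPT_PKG_FOUND=0'
fi
```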

And as before, if you want to use the package to query the system packaging information, build it on a system using apt and with its libapt-pkg-dev installed.

A few other cleanups were made too.

Changes in version 0.0.5 (2018-09-29)
  • NAMESPACE now sets symbol registration

  • configure checks for a suitable system; it no longer errors if none is found, but sets a good/bad define for the build

  • Existing C++ code is now conditional on having a 'good' build system, or else alternate code is used (which succeeds everywhere)

  • Added suitable() returning a boolean with the configure result

  • Tests are conditional on suitable() to test good builds

  • The Travis setup was updated

  • The vignette was updated and expanded

Courtesy of CRANberries, there is also a diffstat report for this release.

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Valutakrambod - A python and bitcoin love story

Planet Debian - Sat, 29/09/2018 - 10:20pm

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason that, a few months ago, I started writing a python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated to "small currency shop".

The library uses the tornado python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod

class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
        pass

    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )

    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)

    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
            pass
        for stream in self.streams:
            stream.close()

The library client loops over all known "public" services, initialises each of them, subscribes to any updates from the service, and checks for and activates websocket streaming if the service provides it; if no streaming is supported, it fetches information from the service immediately and sets up a periodic update every 60 seconds. The output from this client can look like this:

Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.560 6593.690
Hitbtc          BTC-USD: 6594.560 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.570 6593.690
Bitstamp        EUR-USD:    1.159    1.154
Hitbtc          BTC-USD: 6594.570 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Paymium         BTC-EUR: 5680.000 5620.240

The exchange order book is tracked in addition to the best buy/sell price, for those that need to know the details.

The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and is activated by using the '-c' argument. Without the argument the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

Name            Pair    Bid         Ask         Spr    Ftcd  Age
BitcoinsNorway  BTCEUR  5591.8400   5711.0800   2.1%   16    nan     60
Bitfinex        BTCEUR  5671.0000   5671.2000   0.0%   16    22      59
Bitmynt         BTCEUR  5580.8000   5807.5200   3.9%   16    41      60
Bitpay          BTCEUR  5663.2700   nan         nan%   15    nan     60
Bitstamp        BTCEUR  5664.8400   5676.5300   0.2%   0     1       1
Bl3p            BTCEUR  5653.6900   5684.9400   0.5%   0     nan     19
Coinbase        BTCEUR  5600.8200   5714.9000   2.0%   15    nan     nan
Kraken          BTCEUR  5670.1000   5670.2000   0.0%   14    17      60
Paymium         BTCEUR  5620.0600   5680.0000   1.1%   1     7515    nan
BitcoinsNorway  BTCNOK  52898.9700  54034.6100  2.1%   16    nan     60
Bitmynt         BTCNOK  52960.3200  54031.1900  2.0%   16    41      60
Bitpay          BTCNOK  53477.7833  nan         nan%   16    nan     60
Coinbase        BTCNOK  52990.3500  54063.0600  2.0%   15    nan     nan
MiraiEx         BTCNOK  52856.5300  54100.6000  2.3%   16    nan     nan
BitcoinsNorway  BTCUSD  6495.5300   6631.5400   2.1%   16    nan     60
Bitfinex        BTCUSD  6590.6000   6590.7000   0.0%   16    23      57
Bitpay          BTCUSD  6564.1300   nan         nan%   15    nan     60
Bitstamp        BTCUSD  6561.1400   6565.6200   0.1%   0     2       1
Coinbase        BTCUSD  6504.0600   6635.9700   2.0%   14    nan     117
Gemini          BTCUSD  6567.1300   6573.0700   0.1%   16    89      nan
Hitbtc+         BTCUSD  6592.6200   6594.2100   0.0%   0     0       0
Kraken          BTCUSD  6565.2000   6570.9000   0.1%   15    17      58
Exchangerates   EURNOK  9.4665      9.4665      0.0%   16    107789  nan
Norgesbank      EURNOK  9.4665      9.4665      0.0%   16    107789  nan
Bitstamp        EURUSD  1.1537      1.1593      0.5%   4     5       1
Exchangerates   EURUSD  1.1576      1.1576      0.0%   16    107789  nan
BitcoinsNorway  LTCEUR  1.0000      49.0000     98.0%  16    nan     nan
BitcoinsNorway  LTCNOK  492.4800    503.7500    2.2%   16    nan     60
BitcoinsNorway  LTCUSD  1.0221      49.0000     97.9%  15    nan     nan
Norgesbank      USDNOK  8.1777      8.1777      0.0%   16    107789  nan

    The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can explain is how the last three numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.
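    As a rough illustration of how those three freshness columns could be computed (a minimal Python sketch, not the library's actual code; the `Rate` class and its field names are hypothetical):

```python
import time

class Rate:
    """Hypothetical per-service record mirroring the three freshness columns."""
    def __init__(self):
        self.fetched = None   # wall-clock time we last received data
        self.updated = None   # service-reported timestamp of the data
        self.changes = []     # times at which the reported data actually changed

    def observe(self, service_timestamp):
        now = time.time()
        if self.updated != service_timestamp:
            self.changes.append(now)
        self.fetched = now
        self.updated = service_timestamp

    def columns(self):
        """Return (seconds since fetched, data age, change-interval estimate)."""
        now = time.time()
        ftcd = now - self.fetched    # seconds since we heard from the service
        age = now - self.updated     # seconds since the service updated the data
        if len(self.changes) < 2:
            interval = float('nan')  # not enough samples to estimate
        else:
            span = self.changes[-1] - self.changes[0]
            interval = span / (len(self.changes) - 1)
        return ftcd, age, interval
```

    The change-interval estimate here simply averages the spacing between observed changes; the real client may use a different estimator.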

    If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.

    As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

    Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

    Pulling back

    Planet Debian - Sat, 29/09/2018 - 6:15pm

    I've updated my fork of the monkey programming language to allow object-based method calls.

    That's allowed me to move some of my "standard-library" code into Monkey and out of Go, which is neat. This is a simple example:

    //
    // Reverse a string.
    //
    function string.reverse() {
       let r = "";
       let l = len(self);
       for( l > 0 ) {
          r += self[l-1];
          l--;
       }
       return r;
    }

    Usage is the obvious:

    puts( "Steve".reverse() );

    Or:

    let s = "Input";
    s = s.reverse();
    puts( s + "\n" );

    Most of the pain here was updating the parser to recognize that "." meant a method call was happening; once that was done, it was only a matter of passing the implicit self object to the appropriate functions.
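    The implicit-self trick can be sketched outside of Monkey too (a minimal Python analogy, not the interpreter's actual Go code; the `methods` registry and `call_method` helper are hypothetical):

```python
# Registry of methods keyed by (receiver type name, method name).
methods = {}

def define_method(type_name, name, fn):
    methods[(type_name, name)] = fn

def call_method(receiver, name, *args):
    # Look up the method by the receiver's type, then prepend the
    # receiver as the implicit 'self' argument -- the same move the
    # evaluator makes once the parser has recognized the "." form.
    fn = methods[(type(receiver).__name__, name)]
    return fn(receiver, *args)

# A 'reverse' method for strings, mirroring the Monkey example.
define_method("str", "reverse", lambda self: self[::-1])

print(call_method("Steve", "reverse"))  # evetS
```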

    This work was done in a couple of 30-60 minute chunks. I find that I'm only really able to commit to that amount of work these days, so I've started to pull back from other projects.

    Oiva is now 21 months old and he sucks up all my time & energy. I can't complain, but equally I can't really start concentrating on longer-projects even when he's gone to sleep.

    And that concludes my news for the day.

    Goodnight dune..

    Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

    Michael Meeks: 2018-09-29 Saturday

    Planet GNOME - Sat, 29/09/2018 - 6:07pm
    • Taxi to the airport; flight home - wrote blog; added stats to ESC minutes, worked on mail. Kindly picked up by J.; lovely to see her, H&M again; home, relaxed.

    Self-plotting output from feedgnuplot and python-gnuplotlib

    Planet Debian - Sat, 29/09/2018 - 1:40pm

    I just made a small update to feedgnuplot (version 1.51) and to python-gnuplotlib (version 0.23). Demo:

    $ seq 5 | feedgnuplot --hardcopy showplot.gp
    $ ./showplot.gp
    [plot pops up]
    $ cat showplot.gp
    #!/usr/bin/gnuplot
    set grid
    set boxwidth 1
    histbin(x) = 1 * floor(0.5 + x/1)
    plot '-' notitle
    1 1
    2 2
    3 3
    4 4
    5 5
    e
    pause mouse close

    I.e. there's now support for a fake gp terminal that's not a gnuplot terminal at all, but rather a way to produce a self-executable gnuplot script. 99% of this was already implemented in --dump, but this way to access that functionality is much nicer. In fact, the machine running feedgnuplot doesn't even need to have gnuplot installed at all. I needed this because I was making complicated plots on a remote box, and X-forwarding was being way too slow. Now the remote box creates the self-plotting gnuplot scripts, I scp those, evaluate them locally, and then work with interactive visualizations.
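    The idea is simple enough to sketch by hand (a minimal stand-in for what the fake gp terminal emits, not feedgnuplot's actual implementation; the `write_selfplotting_script` helper is hypothetical):

```python
import os
import stat

def write_selfplotting_script(path, points):
    """Write an executable gnuplot script that carries its own data
    inline ('-' with an 'e' terminator) and pops up an interactive
    plot when run, staying open until the window is closed."""
    lines = ["#!/usr/bin/gnuplot",
             "set grid",
             "plot '-' notitle"]
    lines += [f"{x} {y}" for x, y in points]
    lines += ["e", "pause mouse close"]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    # Mark it executable so it can be run directly after scp'ing it over.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

write_selfplotting_script("showplot.gp", [(i, i) for i in range(1, 6)])
```

    The machine that generates such a script never needs gnuplot installed; only the machine that runs it does.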

    The python frontend gnuplotlib has received an analogous update.

    Dima Kogan http://notes.secretsauce.net Dima Kogan

    Jim Hall: Open source tools I used to write my latest book

    Planet GNOME - Fri, 28/09/2018 - 11:46pm
    I first used and contributed to Free software and open source software in 1993, and since then I've been an open source software developer and evangelist. I've written or contributed to dozens of open source software projects, although the one that I'll be remembered for is the FreeDOS Project, an open source implementation of the DOS operating system.

    I recently wrote a book about FreeDOS. Using FreeDOS is my celebration of the 24th anniversary of FreeDOS. This is a collection of how-tos about installing and using FreeDOS, essays about my favorite DOS applications, and quick-reference guides to the DOS command line and DOS batch programming. I've been working on this book for the last few months, with the help of a great professional editor.

    Using FreeDOS is available under the Creative Commons Attribution (cc-by) International Public License. You can download the EPUB and PDF versions at no charge from the FreeDOS e-books website. (There's also a print version, for those who prefer a bound copy.)

    The book was produced almost entirely with open source software. I'd like to share a brief insight into the tools I used to create, edit, and produce Using FreeDOS.

    Google Docs
    This was the only tool that wasn't open source software. I uploaded my first drafts to Google Docs so my editor and I could collaborate. I'm sure there are open source collaboration tools, but the ability for two people to edit the same document at the same time, comments, edit suggestions, change tracking—not to mention the use of paragraph styles and the ability to download the finished document—made Google Docs a valuable part of the editing process.

    LibreOffice
    I started on LibreOffice 6.0 but I finished the book using LibreOffice 6.1. I love LibreOffice's rich support of styles. Paragraph styles made it really easy to apply a style for titles, headers, body text, sample code, and other text. Character styles let me modify the appearance of text within a paragraph, such as inline sample code or a different style to indicate a filename. Graphics styles let me apply certain styling to screenshots and other images. And page styles allowed me to easily modify the layout and appearance of the page.

    GIMP
    My book includes a lot of DOS program screenshots, website screenshots, and FreeDOS logos. I used the GIMP to modify these images for the book. Usually this was simply cropping or resizing an image, but as I prepare the print edition of the book, I'm using the GIMP to create a few images that will be simpler for print layout.

    Inkscape
    Most of the FreeDOS logos and fish mascots are in SVG format, and I used Inkscape for any image tweaking here. And in preparing the PDF version of the ebook, I wanted a simple blue banner at the top of the page, with the FreeDOS logo in the corner. After some experimenting, I found it easier to create an SVG image in Inkscape that looked like the banner I wanted, and pasted that into the header.

    ImageMagick
    While it's great to use the GIMP for the fine work, sometimes it's just faster to run an ImageMagick command over a set of images, such as to convert them into PNG format or to resize them.

    Sigil
    LibreOffice can export directly to EPUB format, but it wasn't a great transfer. I haven't tried creating an EPUB with LibreOffice 6.1, but LibreOffice 6.0 didn't include my images. It also added styles in a weird way. I used Sigil to tweak the EPUB file and make everything look right. Sigil even has a preview function so you can see what the EPUB will look like.

    QEMU
    Because this book is about installing and running FreeDOS, I needed to actually run FreeDOS. You can boot FreeDOS inside any PC emulator, including VirtualBox, QEMU, GNOME Boxes, PCem, and Bochs. But I like the simplicity of QEMU. And the QEMU console lets you issue a screendump in PPM format, which is ideal for grabbing screenshots to include in the book.
    And of course, I have to mention running GNOME on Linux. I use the Fedora distribution of Linux. This article originally appeared on Opensource.com as 6 open source tools for writing a book.

    MicroDebConf Brasília 2018

    Planet Debian - Fri, 28/09/2018 - 11:20pm

    After I came back to my home city (Brasília), I felt the need to promote Debian and help people contribute to it. Some old friends from my former university (University of Brasília) and the local community (Debian Brasília) came up with the idea of running a Debian-related event, and I just thought: “That sounds amazing!”. We contacted the university to book a small auditorium there for an entire day. After that we started to think: how should we name the event? Debian Day had been more or less one month earlier; someone suggested a MiniDebConf, but I thought our event was going to be much smaller than regular MiniDebConfs. So we decided to use a term we had used some time ago here in Brasília: we called it MicroDebConf :)

    MicroDebConf Brasília 2018 took place at the Gama campus of the University of Brasília on September 8th. It was amazing: we gathered a lot of students from the university and some high schools, plus some free software enthusiasts. We had 44 attendees in total; we did not expect that many people at the beginning! During the day we presented what the Debian Project is and the many different ways to contribute to it.

    Since our focus was newcomers, we started from the beginning, explaining how to use Debian properly, how to interact with the community and how to contribute. We also introduced some other subjects such as management of PGP keys, network setup with Debian and some topics around Linux kernel contributions. As you probably know, students are never satisfied: sometimes the talks are too easy and basic, and other times too hard and complex to follow. So we decided to balance the level of the talks: we started from Debian basics and went all the way to details of the Linux kernel implementation. Their feedback was positive, so I think we should do it again; attracting students is always a challenge.

    At the end of the day we had some discussions about what we should do to grow our local community. We want more local people actually contributing to free software projects, especially Debian. A lot of people were interested, but some of them said they need some guidance; the life of a newcomer is not so easy for now.

    After some discussion we came up with the idea of a study group on Debian packaging. We will schedule meetings every week (or every two weeks, not decided yet), and during these meetings we will present on packaging (good practices, tooling and anything else people need) and do some hands-on work. My intention is to document everything we do, to make life easier for future newcomers who want to do Debian packaging. My main reference for this study group has been LKCamp; they are a more consolidated group, and their focus is to help people start contributing to the Linux kernel.

    In my opinion, this kind of initiative could help us bring new blood to the project and disseminate the free software ideas and culture. Another idea we have is to promote Debian and free software in general to non-technical people. We realized that we need to reach these people if we want a broader community; we do not know exactly how yet, but it is on our radar.

    After all these talks and discussions we needed some time to relax, and we did that together! We went to a bar and got some beer (except those under 18 years old :) and food. Of course our discussions about free software kept running all night long.

    The following is an overview of the conference:

    • We probably coined the term and are the first to organize a MicroDebConf (we already held one in 2015). We should do more to promote this kind of local event

    • I guess we inspired a lot of young people to contribute to Debian (and free software in general)

    • We defined a way to help local people start contributing to Debian through packaging. I really like this idea of a study group; meeting people in person is always the best way to create bonds

    • Now we hopefully will have a stronger Debian community in Brasília - Brazil \o/

    Last but not least, I would like to thank LAPPIS (a research lab I was part of during my undergrad), who helped us with all the logistics and bureaucracy, and Collabora for sponsoring the coffee break! Collabora, LAPPIS and we share the same goal: promote FLOSS to all these young people and make our community grow!

    Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

    Michael Meeks: 2018-09-28 Friday

    Planet GNOME - Fri, 28/09/2018 - 11:00pm
    • Up lateish, lots of hallway-track conversations in the sun. Spoke to a set of local engineering students with Eike - about Solving arbitrary engineering problems - hopefully helpful for them: lots of good questions.
    • Back for the closing session; out for a team meal together - lots of good Italian food, and company. To a cocktail bar afterwards. Bid 'bye to all - a great conference. Finally got time to review & sign-off on CP's 2017 accounts.
