
Google Summer of Code 2018 with Debian - Week 4

Planet Debian - Mon, 11/06/2018 - 8:30pm

After working on designs and getting my hands dirty with KIVY for the first 3 weeks, I became comfortable with my development environment and was able to deliver features within a couple of days, with UI, tests, and documentation. In this blog post, I explain how I converted all my designs into code and what I've learned along the way.

The Sign Up

In order to implement the above design in KIVY, the best way is to use kv-lang. It involves writing a kv file which contains the widget tree of the layout and a lot more. One can learn more about kv-lang from the documentation. To begin with, let us look at the simplest kv file.

BoxLayout:
    Label:
        text: 'Hello'
    Label:
        text: 'World'

KV Language

In KIVY, widgets are used to build the UI. The Widget base class is what is derived to create all other UI elements in KIVY, like layouts, buttons, labels and so on. Indentation is used in kv just like in Python to define children. In our kv file above, we're using BoxLayout, which arranges all its children in either horizontal (the default) or vertical orientation. So both Labels will be laid out horizontally, one after another.

Just like child widgets, one can also set values to properties, like 'Hello' to the text of the first Label in the code above. More information about which properties can be defined for BoxLayout and Label can be found in their API documentation. All that remains is importing this .kv file (say sample.kv) from the module which runs the KIVY app. You might notice that, for now, Language and Timezone are kept static. The reason is that the Language support architecture is yet to be finalized, and both options would require a Drop Down list, the design and implementation of which will be handled separately.

In order to build the UI following the design, I had to experiment with widgets. When all was done, the signup.kv file contained the resultant UI.

Validations

Now, the good part is that we have a UI and the user can input data. The bad part is that the user can input any data! So it's very important to validate whether the user is submitting data in the correct format. Specifically for the Sign Up module, I had to validate the Email, Passwords and Full Name submitted by the user. The validation module can be found here, containing classes and methods for what I intended to do.

It's important that the user gets feedback after validation if something is wrong with the input. This is done by swapping the Label's text for an error message, colored bleeding red, by calling prompt_error_message on unsuccessful validation.
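The repository has the real validation module; the gist of the checks can be sketched roughly like this (the function names and rules below are my own illustration, not the project's actual API):

```python
import re

# Illustrative sketch of the kind of checks the Sign Up module performs.
EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def is_valid_email(email):
    """Accept only a plausibly well-formed address."""
    return bool(EMAIL_RE.match(email.strip()))

def is_valid_password(password, confirmation):
    """Require a minimum length and a matching confirmation field."""
    return len(password) >= 8 and password == confirmation

def is_valid_fullname(fullname):
    """Reject empty or whitespace-only names."""
    return bool(fullname.strip())
```

On a failed check, the UI would then call something like prompt_error_message to swap the Label's text for the error.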

Updating The Database

After successful validation, the Sign Up module steps forward to update the database using the sqlite3 module. But before that, the Email and Full Name are cleaned of any unnecessary whitespace, tabs and newline characters. A universally unique identifier (uuid) is generated for the user_id. The plain-text Password is changed to a sha256 hash string for security. Finally, sqlite3 is integrated into updatedb.py to update the database. The SQLite database is stored in a single file named new_contributor_wizard.db. For user information, a table named USERS is created, if not already present, during initialization of the UpdateDB instance. Finally, the information is stored, or an error is returned if the Email already exists. This is what the USERS schema looks like:

id VARCHAR(36) PRIMARY KEY,
email UNIQUE,
pass VARCHAR(64),
fullname TEXT,
language TEXT,
timezone TEXT
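The cleaning, uuid generation, hashing and insertion described above can be sketched like this (a minimal stand-in for updatedb.py using an in-memory database; the real module differs in detail):

```python
import hashlib
import sqlite3
import uuid

def signup_user(db, email, password, fullname, language='English', timezone='UTC'):
    """Sketch of the Sign Up database step: clean input, generate an id,
    hash the password, and insert unless the email already exists."""
    email = ' '.join(email.split())        # strip stray whitespace/tabs/newlines
    fullname = ' '.join(fullname.split())
    user_id = str(uuid.uuid4())            # 36-character universally unique id
    pass_hash = hashlib.sha256(password.encode('utf-8')).hexdigest()
    try:
        db.execute(
            'INSERT INTO USERS VALUES (?, ?, ?, ?, ?, ?)',
            (user_id, email, pass_hash, fullname, language, timezone))
    except sqlite3.IntegrityError:
        # the UNIQUE constraint on email fired
        raise ValueError('Email already exists')
    return user_id

db = sqlite3.connect(':memory:')
db.execute('''CREATE TABLE IF NOT EXISTS USERS (
    id VARCHAR(36) PRIMARY KEY, email UNIQUE, pass VARCHAR(64),
    fullname TEXT, language TEXT, timezone TEXT)''')
```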

After the database is updated, i.e. on successful account creation, the natural flow is to take the user to the Dashboard screen. To keep this feature atomic, integration with the Dashboard will be done once all 3 features (SignUp, SignIn, and Dashboard) are merged. So, in order to showcase a successful sign-up, I've used a text confirmation. Below is the screencast of how the feature looks and what changes it makes in the database.

The Sign In

If you look into the difference in UI of SignIn module in comparison with the SignUp, you might notice a few changes.

  • The New Contributor Wizard is now right-aligned
  • Instead of 2 columns taking user information, here we have just one with Email and Password

Hence, the UI needs only a little change, and the result can be seen in signin.py.

Validations

Just like in the Sign Up module, we don't trust the user's input to be sane. Hence, we validate whether the user is giving us a well-formatted Email and Password. The resultant validations of the Sign In module can be seen in validations.py.

Updating The Database

After successful validation, the next step is cleaning the Email and hashing the Password entered by the user. Here we have two possibilities for an unsuccessful sign-in:

  • Either the Email entered by the user doesn't exist in the database
  • Or the Password entered by the user is not correct

Otherwise, the user is signed in successfully. For unsuccessful sign-ins, I have created an exceptions.py module to prompt the error correctly. updatedb.py contains the database operations for the Sign In module.

The Exceptions

The exceptions.py module of Sign In contains the Exception classes, defined as:

  • UserError - this class is used to throw an exception when Email doesn't exist
  • PasswordError - this class is used to throw an exception when Password doesn't match the one saved in the database with the corresponding email.
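A rough sketch of how these exceptions fit into the sign-in check (again illustrative; the real exceptions.py and updatedb.py differ in detail):

```python
import hashlib
import sqlite3

class UserError(Exception):
    """Raised when the Email doesn't exist in the database."""

class PasswordError(Exception):
    """Raised when the Password hash doesn't match the stored one."""

def signin_user(db, email, password):
    """Clean the email, look it up, and compare password hashes."""
    email = ' '.join(email.split())
    row = db.execute('SELECT pass FROM USERS WHERE email = ?', (email,)).fetchone()
    if row is None:
        raise UserError(email)
    if row[0] != hashlib.sha256(password.encode('utf-8')).hexdigest():
        raise PasswordError(email)
    return True

# toy database for demonstration
db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE USERS (id TEXT PRIMARY KEY, email UNIQUE, pass TEXT)')
db.execute('INSERT INTO USERS VALUES (?, ?, ?)',
           ('1', 'jane@example.com',
            hashlib.sha256(b'hunter2hunter2').hexdigest()))
```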

All these modules are integrated with signin.py and the resultant feature can be seen in action in the screencast below. Also, here's the merge request for the same.

The Dashboard

The Dashboard is completely different from the above two modules. If New Contributor Wizard is the culmination of different user stories and interactive screens, then the Dashboard is the protagonist of all the other features. A successful SignIn or SignUp directs the user to the Dashboard. All the tutorials and tools are available to the user from there on.

The UI

There are 2 segments of the Dashboard screen: one for all the menu options on the left, and another for the tutorials and tools of the selected menu option on the right. So the screen on the right needs to change every time a menu option is selected. KIVY provides a widget named Screen Manager to manage this gracefully. But in order to control the transition of just a part of the screen, rather than the entire screen, one has to dig deep into the API and work it out. That's when I remembered a sentence from the Zen of Python, "Simple is better than complex", and I chose the simple way of changing the screen, i.e. the add/remove widget functions.
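In plain Python terms, the simple approach amounts to clearing the right-hand container and adding the selected option's screen to it. The classes below are toy stand-ins for the Kivy widgets the real dashboard.py uses:

```python
class Pane:
    """Toy stand-in for a Kivy layout's child-management API."""
    def __init__(self):
        self.children = []

    def clear_widgets(self):
        self.children.clear()

    def add_widget(self, widget):
        self.children.append(widget)

def switch_screen(right_pane, screens, option):
    """Replace whatever is on the right with the selected option's screen."""
    right_pane.clear_widgets()
    right_pane.add_widget(screens[option])
```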

In dashboard.py, I'm overriding the on_touch_down function to check which menu option the user clicks on, and calling enable_menu accordingly.

The menu options on the left are not Button widgets. I had the option of using Button directly, but it would have needed customization to look pretty. Instead, I used BoxLayout and Label to build a button-like element. In enable_menu I only check which option the user is clicking on, using the touch API. Then all I have to do is highlight the selected option and unfocus all the other options. The final UI can be seen in dashboard.kv.
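The touch check boils down to a rectangle hit test, which Kivy exposes as Widget.collide_point; a minimal sketch of the same logic in plain Python (names are mine, not the project's):

```python
def collides(widget_x, widget_y, width, height, touch_x, touch_y):
    """Return True if the touch falls inside the widget's bounding box,
    which is the check Kivy's Widget.collide_point performs."""
    return (widget_x <= touch_x <= widget_x + width and
            widget_y <= touch_y <= widget_y + height)

def pick_menu_option(options, touch_x, touch_y):
    """Find which menu option was touched; options maps a name to (x, y, w, h)."""
    for name, (x, y, w, h) in options.items():
        if collides(x, y, w, h, touch_x, touch_y):
            return name
    return None
```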

Courseware

Along with highlighting the selected option, the Dashboard also switches to the courseware, i.e. the tools and tutorials for the selected option, on the right. To give the application a modular structure, all these options are built as separate modules and then integrated into the Dashboard. Here are all the courseware modules built for the Dashboard:

  • blog - Users will be given tools to create and deploy their blogs and also learn the best practices.
  • cli - Understanding Command Line Interface will be the goal with all the tutorials provided in this module.
  • communication - Communication module will have tutorials for IRC and mailing lists and showcase best communication practices. The tools in this module will help user subscribe to the mailing lists of different open source communities.
  • encryption - Encrypting communication and data will be taught using this module.
  • how_to_use - This would be an introductory module helping users understand how to use this application.
  • vcs - Version Control Systems like Git are important while working on a project, whether personal, with a team, or anything in between.
  • way_ahead - This module will help users reach out to different open source communities and organizations. It will also showcase open source projects to users according to their preferences, and information about programs like Google Summer of Code and Outreachy.
Settings

Below the menu are the options for settings. These settings also live in separate modules, just like the courseware. Specifically, they are described as:

  • application_settings - Helps the user manage settings specific to the KIVY application, like resolution.
  • theme_settings - Lets the user manage theme-related settings, like the color scheme.
  • profile_settings - Helps the user manage information about themselves.

The merge request which incorporates the Dashboard feature in the project can be seen in action in the screencast below.

The Conclusion

Week 4 was quite satisfying for me, as I felt I was adding value to the project with these merge requests. As soon as the merge requests are reviewed and merged into the repository, I'll work on integrating all these features to create the seamless experience the user should have. There are a few necessary modifications still to be made to the features, like supporting multiple languages and adding the gradient to the background as seen in the design. I'll create issues on Redmine for these and will work on them as soon as the integration is done. My next task is designing how tutorials and tasks will look in the right segment of the Dashboard.

Shashank Kumar https://blog.shanky.xyz/ Shanky's Brainchild

Microsoft’s failed attempt on Debian packaging

Planet Debian - Mon, 11/06/2018 - 11:13am

Just recently Microsoft Open R 3.5 was announced, as an open source implementation of R with some improvements. Binaries are available for Windows, Mac, and Linux. I dared to download and play around with the files, only to be shocked at how incompetent Microsoft is at packaging.

From the microsoft-r-open-mro-3.5.0 postinstall script:

#!/bin/bash
#TODO: Avoid hard code VERSION number in all scripts
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}"
echo $VERSION
ln -s "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -s "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript
rm /bin/sh
ln -s /bin/bash /bin/sh

First of all, the ln -s calls will fail if the standard R package is installed, but much worse, forcibly relinking /bin/sh to bash is something I didn’t expect to see.

Then, looking at the prerm script, it is getting even more funny:

#!/bin/bash
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}/"
rm /usr/bin/R
rm /usr/bin/Rscript
rm -rf "${INSTALL_PREFIX}/lib64/R/backup"

Stop, wait, you are removing /usr/bin/R without even checking that it points to the R you have installed???

I guess Microsoft should read up a bit, in particular about dpkg-divert and proper packaging. What came in here was such an exhibition of incompetence that I can only assume they are doing it on purpose.

PostScriptum: A short look at the man page of dpkg-divert gives a nice example of how it should be done.
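Following the pattern from that man page, the maintainer scripts could look roughly like this (a sketch with the package's own name substituted in, not a tested maintainer script):

```shell
# postinst: move the distribution's R aside instead of clobbering it
dpkg-divert --package microsoft-r-open-mro-3.5.0 --add --rename \
    --divert /usr/bin/R.distrib /usr/bin/R

# prerm/postrm: undo the diversion, restoring the original file
dpkg-divert --package microsoft-r-open-mro-3.5.0 --remove --rename \
    --divert /usr/bin/R.distrib /usr/bin/R
```

With a diversion registered, dpkg itself tracks who owns /usr/bin/R, instead of the package blindly removing whatever happens to be there.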

PPS: I first reported these problems in the R Open forums, and later got an answer that they will look into it.

Norbert Preining https://www.preining.info/blog There and back again

Running Digikam inside Docker

Planet Debian - Mon, 11/06/2018 - 9:35am

After my recent complaint about AppImage, I thought I’d describe how I solved my problem. I needed a small patch to Digikam, which was already in Debian’s 5.9.0 package, and the thought of rebuilding the AppImage was… unpleasant.

I thought – why not just run it inside Buster in Docker? There are various sources on the Internet for X11 apps in Docker. It took a little twiddling to make it work, but I did.

My Dockerfile was pretty simple:

FROM debian:buster
MAINTAINER John Goerzen
RUN apt-get update && \
    apt-get -yu dist-upgrade && \
    apt-get --install-recommends -y install firefox-esr digikam digikam-doc \
        ffmpegthumbs imagemagick minidlna hugin enblend enfuse minidlna pulseaudio \
        strace xterm less breeze && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN adduser --disabled-password --uid 1000 --gecos "John Goerzen" jgoerzen && \
    rm -r /home/jgoerzen/.[a-z]*
RUN rm /etc/machine-id
CMD /usr/bin/docker
RUN mkdir -p /nfs/personalmedia /run/user/1000 && chown -R jgoerzen:jgoerzen /nfs /run/user/1000

I basically create the container and my account in it.

Then this script starts up Digikam:

#!/bin/bash
set -e
# This will be unnecessary with docker 18.04 theoretically.... --privileged see
# https://stackoverflow.com/questions/48995826/which-capabilities-are-needed-for-statx-to-stop-giving-eperm
# and https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1755250
docker run -ti \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "/run/user/1000/pulse:/run/user/1000/pulse" \
    -v /etc/machine-id:/etc/machine-id \
    -v /etc/localtime:/etc/localtime \
    -v /dev/shm:/dev/shm \
    -v /var/lib/dbus:/var/lib/dbus \
    -v /var/run/dbus:/var/run/dbus \
    -v /run/user/1000/bus:/run/user/1000/bus \
    -v "$HOME:$HOME" \
    -v "/nfs/personalmedia/Pictures:/nfs/personalmedia/Pictures" \
    -e DISPLAY="$DISPLAY" \
    -e XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \
    -e DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
    -e LANG="$LANG" \
    --user "$USER" \
    --hostname=digikam \
    --name=digikam \
    --privileged \
    --rm \
    jgoerzen/digikam "$@" /usr/bin/digikam

The goal here was not total security isolation; if it had been, all the dbus and $HOME mounting would have been a poor idea. But as an alternative to AppImage — well, it worked perfectly. I could even get security updates if I wanted.

John Goerzen http://changelog.complete.org The Changelog

Weblate 3.0.1

Planet Debian - Sun, 10/06/2018 - 10:15pm

Weblate 3.0.1 has been released today. It contains several bug fixes, most importantly a fix for a possible migration issue affecting users migrating from 2.20. There was no data corruption; some foreign keys were just possibly not properly migrated. Upgrading from 3.0 to 3.0.1 will fix this, as will going directly from 2.20 to 3.0.1.

Full list of changes:

  • Fixed possible migration issue from 2.20.
  • Localization updates.
  • Removed obsolete hook examples.
  • Improved caching documentation.
  • Fixed displaying of admin documentation.
  • Improved handling of long language names.

If you are upgrading from an older version, please follow our upgrading instructions; the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

RcppZiggurat 0.1.5

Planet Debian - Sun, 10/06/2018 - 8:27pm

A maintenance release 0.1.5 of RcppZiggurat is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator, improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure, where the Ziggurat from this package dominates the implementations accessed from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.*, just as we did with the most recent pinp release two days ago. No other changes were made. Other changes that had been pending are a minor rewrite of DOIs in DESCRIPTION, a corrected state setter thanks to a PR by Ralf Stubner, and a tweak to function registration to keep user_norm_rand() visible.

The NEWS file entry below lists all changes.

Changes in version 0.1.5 (2018-06-10)
  • Description rewritten using doi for references.

  • Re-setting the Ziggurat generator seed now correctly re-sets state (Ralf Stubner in #7 fixing #3)

  • Dynamic registration reverts to manual mode so that user_norm_rand() is visible as well (#7).

  • The vignette was updated to accommodate pandoc 2.* [CRAN request].

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

RcppGSL 0.3.6

Planet Debian - Sun, 10/06/2018 - 8:20pm

A maintenance update 0.3.6 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.*, just as we did with the most recent pinp release two days ago. No other changes were made. The (this time really boring) NEWS file entry follows:

Changes in version 0.3.6 (2018-06-10)
  • The vignette was updated to accommodate pandoc 2.* [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

RcppClassic 0.9.10

Planet Debian - Sun, 10/06/2018 - 6:36pm

A maintenance release RcppClassic 0.9.10 is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.*, just as we did with the most recent pinp release two days ago. No other changes were made.

CRANberries also reports the changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Debian LTS work, May 2018

Planet Debian - Sun, 10/06/2018 - 5:05pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked all those hours.

I uploaded the pending changes to linux at the beginning of the month, one of which had been embargoed. I prepared and released another update to the Linux 3.2 longterm stable branch (3.2.102). I then made a final upload of linux based on that.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

Please stop making the library situation worse with attempts to fix it

Planet Debian - Sun, 10/06/2018 - 10:31am

I recently had a simple-sounding desire. I would like to run the latest stable version of Digikam. My desktop, however, runs Debian stable, which has 5.3.0, not 5.9.0.

This is not such a simple proposition.


$ ldd /usr/bin/digikam | wc -l
396

And many of those were required at versions that weren’t in stable.

I had long thought that AppImage was a rather bad idea, but I decided to give it a shot. I realized it was worse than I had thought.

The problems with AppImage

About a year ago, I wrote about the problems with Docker security. I go into much more detail there, but the summary for AppImage is quite similar. How can I trust that all the components in the (for instance) Digikam AppImage are being kept secure? Are they using the latest libssl and libpng, to avoid security issues? How will I get notified of a security update? (There seems to be no mechanism for this right now.) An AppImage user who wants to be secure has to manually answer every one of those questions for every application. Ugh.

Nevertheless, the call of better facial detection beckoned, and I downloaded the Digikam AppImage and gave it a whirl. The darn thing actually fired up. But when it would play videos, there was no sound. Hmmmm.

I found errors like this:

Cannot access file ././/share/alsa/alsa.conf

Nasty. I spent quite some time trying to make ALSA work, before a bunch of experimentation showed that if I ran alsoft-conf on the host, and selected only the PulseAudio backend, then it would work. I reported this bug to Digikam.

Then I thought it was working — until I tried to upload some photos. It turns out that SSL support in Qt in the AppImage was broken, since it was trying to dlopen an incompatible version of libssl or libcrypto on the host. More details are in the bug I reported about this also.

These are just two examples. In the rather extensive Googling I did about these problems, I came across issue after issue people had with running Digikam in an AppImage. These issues are not limited to the ALSA and SSL issues I describe here. And they are not occurring due to some lack of skill on the part of Digikam developers.

Rather, they’re occurring because AppImage packaging for a complex package like this is hard. It’s hard because it’s based on a fiction — the fiction that it’s possible to make an AppImage container for a complex desktop application act exactly the same, when the host environment is not exactly the same. Does the host run PulseAudio or ALSA? Where are its libraries stored? How do you talk to dbus?

And it’s not for lack of trying. The scripts to build the Digikam appimage support runs to over 1000 lines of code in the AppImage directory, plus another 1300 lines of code (at least) in CMake files that handle much of the work, and another 3000 lines or so of patches to 3rd-party packages. That’s over 5000 lines of code! By contrast, the Debian packaging for the same version of Digikam, including Debian patches but excluding the changelog and copyright files, amounts to 517 lines. Of course, it is reusing OS packages for the dependencies that were already built, but this amounts to a lot simpler build.

Frankly I don’t believe that AppImage really lives up to its hype. Requiring reinventing a build system and making some dangerous concessions on security for something that doesn’t really work in the end — not good in my book.

The library problem

But of course, AppImage exists for a reason. That reason is that it’s a real pain to deal with so many levels of dependencies in software. Even if we were to compile from source like the old days, and even if it was even compatible with the versions of the dependencies in my OS, that’s still a lot of work. And if I have to build dependencies from source, then I’ve given up automated updates that way too.

There’s a lot of good that ELF has brought us, but I can’t help but think that it wasn’t really designed for a world in which a program links 396 libraries (plus dlopens a few more). Further, this world isn’t the corporate Unix world of the 80s; Open Source developers aren’t big on maintaining backwards compatibility (heck, both the KDE and Qt libraries under digikam have both been entirely rewritten in incompatible ways more than once!) The farther you get from libc, the less people seem to care about backwards compatibility. And really, who can blame volunteers? You want to work on new stuff, not supporting binaries from 5 years ago, right?

I don’t really know what the solution is here. Build-from-source approaches like FreeBSD and Gentoo have plenty of drawbacks too. Is there some grand solution I’m missing? Some effort to improve this situation without throwing out all the security benefits that individually-packaged libraries give us in distros like Debian?

John Goerzen http://changelog.complete.org The Changelog

Hacker Noir developments

Planet Debian - Sat, 09/06/2018 - 8:47pm

I've been slowly writing my would-be novel, Hacker Noir. See also my Patreon post. I've just pushed out a new public chapter, Assault, to the public website, and a patron-only chapter to Patreon: "Ambush", where the Team is ambushed, and then something bad happens.

The Assault chapter was hard to write. It's based on something that happened to me earlier this year. The Ambush chapter was much more fun.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

New chapter of Hacker Noir on Patreon

Planet Debian - Sat, 09/06/2018 - 8:45pm

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't. I had a year-long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (noir.liw.fi), and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

RcppDE 0.1.6

Planet Debian - Sat, 09/06/2018 - 7:11pm

Another maintenance release, now at version 0.1.6, of our RcppDE package is now on CRAN. It follows the most recent (unblogged, my bad) 0.1.5 release in January 2016 and the 0.1.4 release in September 2015.

RcppDE is a "port" of DEoptim, a popular package for derivative-free optimisation using differential evolution, to C++. By using RcppArmadillo, the code becomes a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions, which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim (and, in fairness, just like most other optimisers) does.

That is also what led to this upload: Kyle Baron noticed an issue when nesting a user-supplied compiled function inside a user-supplied compiled objective function, when using the newest Rcpp. This has to do with some cleanups we made to how RNG state is, or is not, set and preserved. Kevin Ushey was (once again) a real trooper here and added a simple class to Rcpp (in what is now the development version 0.12.17.2, available on the Rcpp drat repo) and used it here to (selectively) restore behaviour similar to what we had in Rcpp (but which had created another issue for another project). So all is good now in all use cases. We also have some other changes contributed by Yi Kang some time ago, for both JADE-style randomization and some internal tweaks. Some packaging details were updated, and that sums up release 0.1.6.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Talk about the Debian GNU/Linux riscv64 port at RISC-V workshop

Planet Debian - Fri, 08/06/2018 - 11:20pm

About a month ago I attended the RISC-V workshop (conference, congress) co-organised by the Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya (UPC).

There I presented a talk with the (unimaginative) name of “Debian GNU/Linux Port for RISC-V 64-bit”, talking about the same topic as many other posts of this blog.

There are 2-3 such RISC-V Workshop events per year, one somewhere in Silicon Valley (initially at UC Berkeley, its birthplace) and the others spread around the world.

The demographics of this gathering are quite different from those of planet-debian; the people attending usually know a lot about hardware, and often about Linux, GNU toolchains and other FOSS, but sometimes very little about the inner workings of FOSS organisations such as Debian. My talk targeted these demographics, so a lot of its content will not teach anything new to most readers of planet-debian.

Still, I know that some readers are interested in parts of this, now that the slides and videos are published, so here it is:

Also very relevant: they were using Debian (our very own riscv64 port, recently imported into the debian-ports infra) in two of the most important hardware demos in the corridors. The rest were mostly embedded distros showcasing FPS games like Quake2, Doom or similar.


All the feedback that I received from many of the attendees about the availability of the port was very positive and they were very enthusiastic, basically saying that they and their teams were really delighted to be able to use Debian to test their different prototypes and designs, and to drive development.

Also, many used Debian daily in their work and research for other purposes, for example a couple of people were proudly showing to me Debian installed on their laptops.

For me, this feedback is a testament to how much of what we do every day matters to the world out there.


For the historical curiosity, I also presented a similar talk in a previous workshop (2 years back) at CSAIL / MIT.

At that time the port was in a much more incipient state, mostly a proof of concept (for example the toolchain had not even started to be upstreamed). Links:

Manuel A. Fernandez Montecelo https://people.debian.org/~mafm/ Manuel A. Fernandez Montecelo :: Personal Debian page - planet-debian

Recently I'm not writing any code.

Planet Debian - Fri, 08/06/2018 - 10:58pm
Recently I'm not writing any code.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201806.html.en Dancer's daily hackings

Elsevier CiteScore™ missing the top conference in data mining

Planet Debian - Fri, 08/06/2018 - 4:01pm

Elsevier Scopus is crap.

It’s really time to abandon Elsevier. German universities canceled their subscriptions. Sweden apparently began now to do so, too. Because Elsevier (and to a lesser extend, other publishers) overcharge universities badly.

Meanwhile, Elsevier still struggles to pretend it offers additional value. For example with the “horribly incomplete” Scopus database. For computer science, Scopus etc. are outright useless.

Elsevier just advertised (spammed) their “CiteScore™ metrics”. “Establishing a new standard for measuring serial citation impact”. Not.

“Powered by Scopus, CiteScore metrics are a comprehensive, current, transparent and” … horribly incomplete for computer science.

An excerpt from Elsevier CiteScore™:

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Scopus coverage years: from 2002 to 2003, from 2005 to 2015 (coverage discontinued in Scopus)

ACM SIGKDD is the top conference for data mining (there are others, like NIPS, with more focus on machine learning; I’m referring to the KDD subdomain).

But for Elsevier, it does not seem to be important.

Forget Elsevier. Also forget Thomson Reuters’ ISI Web of Science. It’s just the same publisher-oriented crap.

Communications of the ACM: Research Evaluation For Computer Science

Niklaus Wirth, Turing Award winner, appears for minor papers from indexed publications, not his seminal 1970 Pascal report. Knuth’s milestone book series, with an astounding 15,000 citations in Google Scholar, does not figure. Neither do Knuth’s three articles most frequently cited according to Google.

Yes, if you ask Elsevier or Thomson Reuters, Donald Knuth’s “The Art of Computer Programming” does not matter. Because it is not published by Elsevier.

They also ignore the fact that open access is quickly gaining importance. Many very influential papers, such as “word2vec”, were first published on the open-access preprint server arXiv. Some were never published anywhere else.

According to Google Scholar, the top venue for artificial intelligence is arXiv cs.LG, and stat.ML is ranked 5. And the top venue for computational linguistics is arXiv cs.CL. In databases and information systems the top venue WWW publishes via ACM, but using open-access links from their web page. The second, VLDB, operates their own server to publish PVLDB as open-access. And number three is arXiv cs.SI, number five is arXiv cs.DB.

Time to move to open access, and away from overpriced publishers. If you want your paper to be read and cited, publish open access, not with expensive walled gardens like Elsevier.

Erich Schubert https://www.vitavonni.de/blog/ Techblogging

My Debian Activities in May 2018

Planet Debian - Pre, 08/06/2018 - 12:51pd

FTP master

This month I accepted 304 packages and rejected 20 uploads. The overall number of packages that got accepted this month was 420.

Debian LTS

This was my forty-seventh month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 24.25h. During that time I did LTS uploads of:

    [DLA 1387-1] cups security update for one CVE
    [DLA 1388-1] wireshark security update for 9 CVEs

I continued to work on the batch of wireshark CVEs and sorted out all that did not affect Jessie or Stretch. In the end I sent my debdiff with patches for 20 Jessie CVEs and 38 Stretch CVEs to Moritz so that he could compare them with his own work. Unfortunately he didn’t use all of them.

The CVEs for krb5 were marked as no-dsa by the security team, so there was no upload for Wheezy. Building the cups package was a bit annoying, as the test suite didn’t want to run at first.

I also tested the apache2 package from Roberto twice and let the package do a second round before the final upload.

Last but not least I did a week of frontdesk duties and prepared my new working environment for Jessie LTS and Wheezy ELTS.

Other stuff

During May I did uploads of …

  • libmatthew-java to fix a FTBFS with Java 9 due to a disappearing javah. In the end it resulted in a new upstream version.

I also prepared the next libosmocore transition by uploading several osmocom packages to experimental. This has to continue in June.

Further I sponsored some glewlwyd packages for Nicolas Mora. He is well on his way to becoming a Debian Maintainer.

Last but not least I uploaded the new package libterm-readline-ttytter-perl, which is needed to bring readline functionality to oysttyer, a command-line Twitter client.

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian


The Psion Gemini

Planet Debian - Enj, 07/06/2018 - 3:04md

So, I backed the Gemini and received my shiny new device just a few months after they said it'd ship, not bad for an Indiegogo project! Out of the box I flashed it, using the non-approved Linux flashing tool available at the time, and failed to back up the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was pretty much unusable (which was marginally upsetting, but hey). After a few more hours/days of playing around I got the IMEI number back into the Gemini and put the stock Android image back on. At that point I didn't have working Bluetooth or WiFi, which was a bit of a pain too; it turns out the MAC addresses for those are also stored in the NVRAM (doh!). That's now mostly working through a bit of collaboration with another Gemini owner: my Gemini currently uses the MAC addresses from his device... which I'll need to fix in the next month or so, else we'll probably have a MAC address collision.

Overall, it's not a bad machine. The keyboard isn't quite as good as I was hoping for, and the phone functionality is not bad once you're on a call, but not great until you are; I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full-time phone. It is, however, really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around: the keyboard is better than the on-screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes. I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

Brett Parker iDunno@sommitrealweird.co.uk The World of SommitRealWeird.

The diameter of German+English

Planet Debian - Mër, 23/05/2018 - 8:30pd

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: removing attributes like [ugs.], [regional], {f}, the to in front of verbs, and other embellishments.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest shortest path, i.e. the greatest distance between any two nodes that are connected at all.
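The steps above can be sketched in a few lines of Python. This is a toy reconstruction under assumptions: the dictionary entries below are made up (the real experiment ran over the full dict.cc dump), and the language tag on each node keeps identical spellings in the two languages distinct.

```python
from collections import defaultdict, deque

def build_graph(pairs):
    # Undirected bipartite graph: each (German, English) dictionary
    # entry becomes an edge; the language tag keeps the two sides apart.
    g = defaultdict(set)
    for de, en in pairs:
        g[("de", de)].add(("en", en))
        g[("en", en)].add(("de", de))
    return g

def shortest_path(g, start, goal):
    # Plain BFS, recording predecessors so the path can be reconstructed.
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in g[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical mini-dictionary, not real dict.cc data:
pairs = [("frech", "fresh"), ("frisch", "fresh"), ("frisch", "cool"),
         ("abweisend", "cool"), ("abweisend", "dismissive")]
g = build_graph(pairs)
path = shortest_path(g, ("de", "frech"), ("en", "dismissive"))
print(" – ".join(word for _, word in path))
# frech – fresh – frisch – cool – abweisend – dismissive
```

The diameter is then the maximum, over all start nodes, of the longest shortest path found by such a BFS; brute-forcing that over roughly 1.5 million nodes is what makes the real computation take hours.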

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: the English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).

Or I could throw in additional languages!

What would you do?

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Home Automation: Graphing MQTT sensor data

Planet Debian - Mar, 22/05/2018 - 11:28md

So I’ve setup a MQTT broker and I’m feeding it temperature data. How do I actually make use of this data? Turns out collectd has an MQTT plugin, so I went about setting it up to record temperature over time.

First problem was that although the plugin supports MQTT/TLS it doesn’t support it for subscriptions until 5.8, so I had to backport the fix to the 5.7.1 packages my main collectd host is running.

The other problem is that collectd is picky about the format it accepts for incoming data. The topic name should be of the format <host>/<plugin>-<plugin_instance>/<type>-<type_instance> and the data is <unixtime>:<value>. I modified my MQTT temperature reporter to publish to collectd/mqtt-host/mqtt/temperature-study, changed the publish line to include the timestamp:

publish.single(pub_topic, str(time.time()) + ':' + str(temp), hostname=Broker, port=8883, auth=auth, tls={})
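As a minimal, hedged sketch of the publisher side: the helper below only builds the topic and payload in the layout collectd expects. The sensor value, timestamp, and user name are made-up assumptions, and the actual paho-mqtt call (which needs a reachable broker) is shown commented out.

```python
import time

# collectd's MQTT plugin expects payloads of the form <unixtime>:<value>.
def collectd_payload(value, now=None):
    now = time.time() if now is None else now
    return "%d:%s" % (now, value)

# Topic layout: <host>/<plugin>-<plugin_instance>/<type>-<type_instance>,
# published under the collectd/ prefix the ACL grants access to.
pub_topic = "collectd/mqtt-host/mqtt/temperature-study"

# 21.5 and the fixed timestamp are placeholder values.
payload = collectd_payload(21.5, now=1528000000)
print(pub_topic, payload)
# collectd/mqtt-host/mqtt/temperature-study 1528000000:21.5

# The actual send, roughly as in the post (needs paho-mqtt and a live broker):
# import paho.mqtt.publish as publish
# publish.single(pub_topic, payload, hostname="mqtt-host", port=8883,
#                auth={"username": "mqtt-temp", "password": "..."}, tls={})
```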

and added a new collectd user to the Mosquitto configuration:

mosquitto_passwd -b /etc/mosquitto/mosquitto.users collectd collectdpass

And granted it read-only access to the collectd/ prefix via /etc/mosquitto/mosquitto.acl:

user collectd topic read collectd/#

(I also created an mqtt-temp user with write access to that prefix for the Python script to connect to.)

Then, on the collectd host, I created /etc/collectd/collectd.conf.d/mqtt.conf containing:

LoadPlugin mqtt
<Plugin "mqtt">
    <Subscribe "ha">
        Host "mqtt-host"
        Port "8883"
        User "collectd"
        Password "collectdpass"
        CACert "/etc/ssl/certs/ca-certificates.crt"
        Topic "collectd/#"
    </Subscribe>
</Plugin>

I had some initial problems when I tried setting CACert to the Let’s Encrypt certificate; it actually wants to point to the “DST Root CA X3” certificate that signs that. Or using the full set of installed root certificates as I’ve done works too. Of course the errors you get back are just of the form:

collectd[8853]: mqtt plugin: mosquitto_loop failed: A TLS error occurred.

which is far from helpful. Once that was sorted collectd started happily receiving data via MQTT and producing graphs for me:

This is a pretty long-winded way of ending up with some temperature graphs - I could have just graphed the temperature sensor using collectd on the Pi to send it to the monitoring host - but it has allowed a simple MQTT broker, publisher + subscriber setup with TLS and authentication to be constructed and confirmed as working.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Faqet

Subscribe to AlbLinux agreguesi