
Feed aggregator

Kalev Lember: gnome-software mini hackfest

Planet GNOME - Fri, 28/09/2018 - 4:23pm

I am in London this week visiting Richard Hughes. We have been working out of his home office and giving some much needed love to GNOME Software.

Here is the main thing we’ve been working on:

Source selection drop down

GNOME Software now has a drop down list for choosing which source to use for installing an app. This is useful when someone has multiple repos enabled that all provide the same app, e.g. GIMP being available both as an RPM package from Fedora and a Flatpak from Flathub.

Previously, GNOME Software treated each version as a separate app and all the apps that were available from both Fedora and Flathub suddenly showed up twice: once for each source. This made browsing through featured apps and categories annoying as there was much repetition. Now instead GNOME Software consolidates them together into one entry and makes it possible to choose which one to install.

Richard worked mostly on the backend code and I did the user-facing stuff. Allan Day helped us over IRC with all the design (thanks Allan! you rock). It was so nice to be together in one office and be able to bounce ideas back and forth. We are hoping we can maybe start doing this more often.

This work is now on GNOME Software git master and will be in Fedora 30 as part of GNOME 3.32. If you are maintaining a Flathub app where it has its desktop file renamed, please check to make sure it has X-Flatpak-RenamedFrom correctly set in the .desktop file. This is needed for GNOME Software to correctly match renamed apps in Flathub to non-renamed ones available from distros. If the key is not there, it should just be a matter of rebuilding the app, after which it should automatically appear.
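
For illustration, here is a sketch of what the key looks like inside an exported .desktop file; the surrounding entries are hypothetical and only the X-Flatpak-RenamedFrom line matters (string-list values in desktop files are semicolon-terminated):

[Desktop Entry]
Type=Application
Name=GIMP
Exec=gimp-2.10 %U
X-Flatpak-RenamedFrom=gimp.desktop;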

Some apps that have renamed desktop files in Flathub need https://github.com/flatpak/freedesktop-sdk-images/pull/122 to land for this to correctly work. Hopefully we can get this part sorted out next week.

It was a fun week and thanks to Richard for letting me stay at his place, and thanks to Red Hat for sponsoring my travels!

Aristotle and D&D alignments

Planet Debian - Thu, 27/09/2018 - 10:12pm

Aristotle’s distinction in the Nicomachean Ethics (EN) between brutishness and vice might be comparable to the distinction in Dungeons & Dragons between chaotic evil and lawful evil, respectively.

I’ve always thought that the forces of lawful evil are more deeply threatening than those of chaotic evil. In the Critical Hit podcast, lawful evil is equated with tyranny.

Of course, at least how I run it, Aristotelian ethics involves no notion of evil, only mistakes about the good.

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Debian Policy call for participation -- September 2018

Planet Debian - Thu, 27/09/2018 - 10:07pm

Here’s a summary of some of the bugs against the Debian Policy Manual that are thought to be easy to resolve.

Please consider getting involved, whether or not you’re an existing contributor.

For more information, see our README.

#152955 force-reload should not start the daemon if it is not running

#172436 BROWSER and sensible-browser standardization

#188731 Also strip .comment and .note sections

#212814 Clarify relationship between long and short description

#273093 document interactions of multiple clashing package diversions

#314808 Web applications should use /usr/share/package, not /usr/share/doc/package

#348336 Clarify Policy around shared configuration files

#425523 Describe error unwind when unpacking a package fails

#491647 debian-policy: X font policy unclear around TTF fonts

#495233 debian-policy: README.source content should be more detailed

#649679 [copyright-format] Clarify what distinguishes files and stand-alone license paragraphs.

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#685746 debian-policy Consider clarifying the use of recommends

#694883 copyright-format: please clarify the recommended form for public domain files

#696185 [copyright-format] Use short names from SPDX.

#697039 expand cron and init requirement to check binary existence to other scripts

#722535 debian-policy: To document: the “Binary-Only” field in Debian changes files.

#759316 Document the use of /etc/default for cron jobs

#770440 debian-policy: policy should mention systemd timers

#780725 PATH used for building is not specified

#794653 Recommend use of dpkg-maintscript-helper where appropriate

#809637 DEP-5 does not support filenames with blanks

#824495 debian-policy: Source packages “can” declare relationships

#833401 debian-policy: virtual packages: dbus-session-bus, dbus-default-session-bus

#845715 debian-policy: Please document that packages are not allowed to write outside their source directories

#850171 debian-policy: Addition of having an ‘EXAMPLES’ section in manual pages debian policy 12.1

#853779 debian-policy: Clarify requirements about update-rc.d and invoke-rc.d usage in maintainer scripts

#904248 Add netbase to build-essential

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

My Work on Debian LTS (September 2018)

Planet Debian - Thu, 27/09/2018 - 11:40am

In September 2018, I did 10 hours of work on the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.

This is my list of work done in September 2018:

  • Upload of polarssl (DLA 1518-1) [1].
  • Work on CVE-2018-16831 discovered in the smarty3 package. Plan (A) was to backport the latest smarty3 release to Debian stretch and jessie, but runtime tests against GOsa² (one of the PHP applications that utilize smarty3) already failed for Debian stretch, so this plan was dropped. Plan (B) then was extracting a patch [2] for fixing this issue in Debian stretch's smarty3 package from a multitude of upstream code changes; finally came the realization that smarty3 in Debian jessie is very likely not affected. Upstream feedback is still pending; upload(s) will occur in the coming week (the first week of October).

light+love
Mike

References

[1] https://lists.debian.org/debian-lts-announce/2018/09/msg00029.html

[2] https://salsa.debian.org/debian/smarty3/commit/8a1eb21b7c4d971149e76cd2b...

sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

A nice oneliner

Planet Debian - Wed, 26/09/2018 - 7:51pm

Pop quiz! Let's say I have a datafile describing some items (images and feature points in this example):

# filename x y
000.jpg 79.932824 35.609049
000.jpg 95.174662 70.876506
001.jpg 19.655072 52.475315
002.jpg 19.515351 33.077847
002.jpg 3.010392 80.198282
003.jpg 84.183099 57.901647
003.jpg 93.237358 75.984036
004.jpg 99.102619 7.260851
005.jpg 24.738357 80.490116
005.jpg 53.424477 27.815635
....
....
149.jpg 92.258132 99.284486

How do I get a random subset of N images, using only the shell and standard commandline tools?

Bam!

$ N=5; ( echo '# filename'; seq 0 149 | shuf | head -n $N | xargs -n1 printf "%03d.jpg\n" | sort) | vnl-join -j filename input.vnl -
# filename x y
017.jpg 41.752204 96.753914
017.jpg 86.232504 3.936258
027.jpg 41.839110 89.148368
027.jpg 82.772742 27.880592
067.jpg 57.790706 46.153623
067.jpg 87.804939 15.853087
076.jpg 41.447477 42.844849
076.jpg 93.399829 64.552090
142.jpg 18.045497 35.381083
142.jpg 83.037867 17.252172

Dima Kogan http://notes.secretsauce.net Dima Kogan

Sam Thursfield: Writing well

Planet GNOME - Wed, 26/09/2018 - 2:37pm

We rely on written language to develop software. I used to joke that I worked as a professional email writer rather than a computer programmer (and it wasn’t really a joke). So if you want to be a better engineer, I recommend that you focus some time on improving your written English.

I recently bought 100 Ways to Improve Your Writing by Gary Provost, which is a compact and rewarding book full of simple and widely applicable guidelines for writers. My advice is to buy a copy!

You can also find plenty of resources online. Start by improving your commit messages. Since we love to automate things, try these shell scripts that catch common writing mistakes. And every time you write a paragraph simply ask yourself: what is the purpose of this paragraph? Is it serving that purpose?

Native speakers and non-native speakers will both find useful advice in Gary Provost’s book. In the UK school system we aren’t taught this stuff particularly well. Many English-as-a-second-language courses don’t teach how to write on a “macro” level either, which is sad because there are many differences from language to language that non-natives need to be aware of. I have seen “Business English” courses that focus on clear and convincing communication, so you may want to look into one of those if you want more than just a book.

Code gets read more than it gets written, so it’s worth taking extra time so that it’s easy for future developers to read. The same is true of emails that you write to project mailing lists. If you want to make a positive change to development of your project, don’t just focus on the code — see if you can find 3 ways to improve the clarity of your writing.

Shannon’s Ghost

Planet Debian - Wed, 26/09/2018 - 4:34am

I’m spending the 2018-2019 academic year as a fellow at the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford.

Claude Shannon on a bicycle.

Every CASBS study is labeled with a list of “ghosts” who previously occupied the study. This year, I’m spending the year in Study 50 where I’m haunted by an incredible cast that includes many people whose scholarship has influenced and inspired me.

The top part of the list of ghosts in Study #50 at CASBS.

Foremost among this group is Study 50’s third occupant: Claude Shannon.

At 21 years old, Shannon proved in his master’s thesis (sometimes cited as the most important master’s thesis in history) that electrical circuits could encode any relationship expressible in Boolean logic, opening the door to digital computing. Incredibly, this is almost never cited as Shannon’s most important contribution. That came in 1948 when he published a paper titled A Mathematical Theory of Communication, which effectively created the field of information theory. Less than a decade after its publication, Aleksandr Khinchin (the mathematician behind my favorite mathematical constant) described the paper saying:

Rarely does it happen in mathematics that a new discipline achieves the character of a mature and developed scientific theory in the first investigation devoted to it…So it was with information theory after the work of Shannon.

As someone whose own research is seeking to advance computation and mathematical study of communication, I find it incredibly propitious to be sharing a study with Shannon.

Although I teach in a communication department, I know Shannon from my background in computing. I’ve always found it curious that, despite the fact that Shannon’s 1948 paper is almost certainly the most important single thing ever published with the word “communication” in its title, Shannon is rarely taught in communication curricula and is sometimes completely unknown to communication scholars.

In this regard, I’ve thought a lot about this passage in Robert Craig’s influential article “Communication Theory as a Field,” which argued:

In establishing itself under the banner of communication, the discipline staked an academic claim to the entire field of communication theory and research—a very big claim indeed, since communication had already been widely studied and theorized. Peters writes that communication research became “an intellectual Taiwan—claiming to be all of China when, in fact, it was isolated on a small island” (p. 545). Perhaps the most egregious case involved Shannon’s mathematical theory of information (Shannon & Weaver, 1948), which communication scholars touted as evidence of their field’s potential scientific status even though they had nothing whatever to do with creating it, often poorly understood it, and seldom found any real use for it in their research.

In preparation for moving into Study 50, I read a new biography of Shannon by Jimmy Soni and Rob Goodman and was excited to find that Craig—although accurately describing many communication scholars’ lack of familiarity—almost certainly understated the importance of Shannon to communication scholarship.

For example, the book form of Shannon’s 1948 article was published by the University of Illinois Press at the urging, and under the editorial supervision, of Wilbur Schramm (one of the founders of modern mass communication scholarship), who was a major proponent of Shannon’s work. Everett Rogers (another giant in communication) devotes a chapter of his “History of Communication Studies”² to Shannon and to tracing his impact in communication. Both Schramm and Rogers built on Shannon in parts of their own work. Shannon has had an enormous impact, it turns out, in several subareas of communication research (e.g., attempts to model communication processes).

Although I find these connections exciting, my own research—like most of the rest of communication—is far from the substance of the technical communication processes at the center of Shannon’s own work. In this sense, it can be a challenge to explain to my colleagues in communication—and to my fellow CASBS fellows—why I’m so excited to be sharing a space with Shannon this year.

Upon reflection, I think it boils down to two reasons:

  1. Shannon’s work is both mathematically beautiful and incredibly useful. His seminal 1948 article points to concrete ways that his theory can be useful in communication engineering including in compression, error correcting codes, and cryptography. Shannon’s focus on research that pushes forward the most basic type of basic research while remaining dedicated to developing solutions to real problems is a rare trait that I want to feature in my own scholarship.
  2. Shannon was incredibly playful. Shannon played games, juggled constantly, and was always seeking to teach others to do so. He tinkered, rode unicycles, built a flame-throwing trumpet, and so on. With Marvin Minsky, he invented the “ultimate machine”—a machine whose only function is to turn itself off—which he kept on his desk.

    A version of Shannon’s “ultimate machine” that is sitting on my desk at CASBS.

I have no misapprehension that I will accomplish anything like Shannon’s greatest intellectual achievements during my year at CASBS. I do hope to be inspired by Shannon’s creativity, focus on impact, and playfulness. In my own little ways, I hope to build something at CASBS that will advance mathematical and computational theory in communication in ways that Shannon might have appreciated.

  1. Incredibly, the year that Shannon was in Study 50, his neighbor in Study 51 was Milton Friedman. Two thoughts: (i) Can you imagine?! (ii) I definitely chose the right study!
  2. Rogers book was written, I found out, during his own stint at CASBS. Alas, it was not written in Study 50.
Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

GSoC 2018: Final Report

Planet Debian - Tue, 25/09/2018 - 7:08pm


This is my final report for my Google Summer of Code 2018; it also serves as my final code submission.

For the last 3 months I have been working with Debian on the project Extracting Data from PDF Invoices and Bills Details. Information about the project can be found here: 

https://wiki.debian.org/SummerOfCode2018/Projects/ExtractingDataFromPDFInvoicesAndBillsDetails.

My mentor and I agreed to modify the work to be done in the summer, as already discussed here: http://blog.harshitjoshi.in/2018/05/gsoc-2018-debian-community-bonding.html
We will advance the ecosystem for machine-readable invoice exchange and make it easily accessible for the whole Python community by making the following contributions:
  • A Python library to read/write/add/edit Factur-X metadata in different XML flavors.
  • A command line interface to process PDF files and access the main library functions.
  • A way to add structured data to existing files or from legacy accounting systems (via the invoice2data project).
  • A new desktop GUI to add, edit, import and export Factur-X metadata in and out of PDF files.
Short overview
The project work can be divided into two parts:
  • Main Deliverable: GUI creation for Factur-X Library
  • Pre-requisites for Main Deliverable: Improvements to invoice2data library and updating Factur-X library to a working state
Contributions to invoice2data
A modular Python library to support your accounting process. Tested on Python 2.7 and 3.4+. Main steps:
  1. extracts text from PDF files using different techniques, like pdftotext, pdfminer or tesseract OCR.
  2. searches for regex in the result using a YAML-based template system
  3. saves results as CSV, JSON or XML or renames PDF files to match the content.
My contributions: https://github.com/invoice-x/invoice2data/commits?author=duskybomb
Contributions to Factur-X
Factur-X is an EU standard for embedding XML representations of invoices in PDF files. This library provides an interface for reading, editing and saving this metadata. My contributions: https://github.com/invoice-x/factur-x-ng/commits?author=duskybomb
Organisation Page
An organisation created on GitHub, invoice-x, to tie all the repositories together in a single place.
link to organisation page: https://github.com/invoice-x/
Organisation Website
A static website briefly explaining the whole project. Link to website: https://www.invoice-x.org/
Main Deliverable Repository
This repository contains the code for the GUI for the Factur-X library. Link to the repository: https://github.com/invoice-x/invoicex-gui

invoicex-gui: invoice2data integration with invoicex-gui and factur-x-ng
Overview
Pre-requisites for Main Deliverable
Factur-X
To work on GUI creation for Factur-X, I first needed to update the Factur-X library to a working state. My mentor, Manuel, did the initial refactoring of the project after forking the original repository, https://github.com/akretion/factur-x.

Since then I have added a few features to the library:
  • Fix checking of embedded resources
  • Converting the documentation format from md to rst
  • Added unit tests for factur-x
  • Added new feature to export metadata in JSON and YAML format
  • Cleaned XML template to add
  • Added validation of country and currency codes with ISO standards.
  • Implemented Command Line Options
Invoice2data
I started contributing to invoice2data in the month of February. Invoice2data became the first open source project I contributed to. The first contribution was just fixing a typo in the documentation, but it introduced me to the world of Free and Open Source Software (FOSS).

Since being selected for Google Summer of Code 2018, I have added the following commits:
  • Removed required fields in favour of providing flexibility to extract data
  • Added feature to extract all fields mentioned in template
  • Updated README and worked on conversion of md to rst
  • Added checks for dependencies: tesseract and imagemagick
  • Changed subprocess input form normal string to list
  • Added more tests and checked coverage locally
  • Fixed the ways invoice2data handles lists
Main Deliverable
Invoicex-GUI
My main deliverable was to build a graphical user interface for the Factur-X library. For this I used the PyQt5 framework. The other options were Kivy and wxWidgets. I have some prior experience with PyQt5, and a bug in Kivy related to the touchpad driver on Debian inclined me towards PyQt5.

Making the GUI was one of the most challenging parts of the GSoC project. The lack of documentation for PyQt5 didn’t help much. I have 3 years of experience with C++ and used it to learn more about PyQt5 through the original Qt documentation, which is in C++.

The GUI includes:
  • Selecting a PDF and searching for any embedded standard
  • If no standard is found, showing a pop-up to select the standard to be added
  • Editing metadata of an existing embedded standard
  • Exporting metadata
  • Validating metadata
  • Using invoice2data to extract field data from an invoice
Weekly Work Done
https://lists.debian.org/debian-outreach/2018/05/msg00015.html (week 1)
https://lists.debian.org/debian-outreach/2018/05/msg00029.html (week 2)
https://lists.debian.org/debian-outreach/2018/06/msg00003.html (week 3)
https://lists.debian.org/debian-outreach/2018/06/msg00029.html (week 4)
https://lists.debian.org/debian-outreach/2018/06/msg00078.html (week 5)
https://lists.debian.org/debian-outreach/2018/06/msg00106.html (week 6)
https://lists.debian.org/debian-outreach/2018/06/msg00136.html (week 7)
https://lists.debian.org/debian-outreach/2018/07/msg00019.html (week 8)
https://lists.debian.org/debian-outreach/2018/07/msg00072.html (week 9, 10)
https://lists.debian.org/debian-outreach/2018/07/msg00105.html (week 11)
https://lists.debian.org/debian-outreach/2018/08/msg00011.html (week 12)

Harshit Joshi noreply@blogger.com Harshit Joshi's Blog

Reproducible Builds: Weekly report #178

Planet Debian - Tue, 25/09/2018 - 6:53pm

Here’s what happened in the Reproducible Builds effort between Sunday September 16 and Saturday September 22 2018:

Patches filed

diffoscope development

diffoscope version 102 was uploaded to Debian unstable by Mattia Rizzolo. It included contributions already covered in previous weeks as well as new ones from:

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this month, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Daniel Shahaf, Holger Levsen, Jelle van der Waa, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Crossing the Great St Bernard Pass

Planet Debian - Tue, 25/09/2018 - 4:26pm

It's a great day for the scenic route to Italy, home of Beethoven's Swiss cousins.

What goes up, must come down...

Descent into the Aosta valley

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Smallish haul

Planet Debian - Tue, 25/09/2018 - 6:34am

It's been a little while since I've made one of these posts, and of course I'm still picking up this and that. Books won't buy themselves!

Elizabeth Bear & Katherine Addison — The Cobbler's Boy (sff)
P. Djèlí Clark — The Black God's Drums (sff)
Sabine Hossenfelder — Lost in Math (nonfiction)
N.K. Jemisin — The Dreamblood Duology (sff)
Mary Robinette Kowal — The Calculating Stars (sff)
Yoon Ha Lee — Extracurricular Activities (sff)
Seanan McGuire — Night and Silence (sff)
Bruce Schneier — Click Here to Kill Everybody (nonfiction)

I have several more pre-orders that will be coming out in the next couple of months. Still doing lots of reading, but behind on writing up reviews, since work has been busy and therefore weekends have been low-energy. That should hopefully change shortly.

Russ Allbery https://www.eyrie.org/~eagle/ Eagle's Path

Archiving web sites

Planet Debian - Tue, 25/09/2018 - 2:00am

I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web.

Converting simple sites

The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as a web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with the "move fast and break things" attitude of web development. Working with the Drupal content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard.

For simple or static sites, the venerable Wget program works well. The incantation to mirror a full web site, however, is byzantine:

$ nice wget --mirror --execute robots=off --no-verbose --convert-links \
    --backup-converted --page-requisites --adjust-extension \
    --base=./ --directory-prefix=./ --span-hosts \
    --domains=www.example.com,example.com http://www.example.com/

The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores robots.txt rules, as is now common practice for archivists, and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.

The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.

That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a --reject-regex option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
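
As a concrete sketch, a politer crawl that also skips such calendar pages could combine Wget's standard rate-limiting options with a rejection pattern (the regular expression below is only an illustration):

$ wget --mirror --wait=1 --limit-rate=200k \
    --reject-regex '(calendar|event)' \
    https://www.example.com/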

JavaScript doom

Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using progressive enhancement to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like NoScript or uMatrix will confirm.

Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper (pamplemousse.ca), I found that WordPress adds query strings (e.g. ?ver=1.12.4) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right Content-Type header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.

As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach.

Creating and displaying WARC files

At the Internet Archive, Brewster Kahle and Mike Burner designed the ARC (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") specification that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the International Internet Preservation Consortium (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based Heritrix crawler.

A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the --warc parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is pywb, a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on http://localhost:8080/:

$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback

This tool was, incidentally, built by the folks behind the Webrecorder service, which can use a web browser to save dynamic page contents.

Unfortunately, pywb has trouble loading WARC files generated by Wget because it followed an inconsistency in the 1.0 specification, which was fixed in the 1.1 specification. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called crawl. Here is how it is invoked:

$ crawl https://example.com/

(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the -exclude-related flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the -c flag. But, best of all, the resulting WARC files load perfectly in pywb.
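
Combining the flags mentioned above, a politer invocation that uses two parallel connections and skips cross-domain page requirements might look like this (a sketch):

$ crawl -c 2 -exclude-related https://example.com/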

Future work and alternatives

There are plenty more resources for using WARC files. In particular, there's a Wget drop-in replacement called Wpull that is specifically designed for archiving web sites. It has experimental support for PhantomJS and youtube-dl integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called ArchiveBot, which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at ArchiveTeam in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, snscrape will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is crocoite, which uses the Chrome browser in headless mode to archive JavaScript-heavy sites.

This article would also not be complete without a nod to the HTTrack project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line.

In the same vein, during my research I found a full rewrite of Wget called Wget2 that has support for multi-threaded operation, which might make it faster than its predecessor. It is missing some features from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support.

Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in Wallabag, a self-hosted "read it later" service designed as a free-software alternative to Pocket (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually unreadable and Wallabag sometimes fails to parse the article. Instead, other tools like bookmark-archiver or reminiscence save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay.

The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously working on a backup of the Internet Archive itself.

This article first appeared in the Linux Weekly News.

As usual, here's the list of issues and patches generated while researching this article:

I also want to personally thank the folks in the #archivebot channel for their assistance and letting me play with their toys.

The Pamplemousse crawl is now available on the Internet Archive; it might end up in the Wayback Machine at some point if the Archive curators think it is worth it.

Another example of a crawl is this archive of two Bloomberg articles which the "save page now" feature of the Internet Archive wasn't able to save correctly. But webrecorder.io could! Those pages can be seen in the Webrecorder player to get a better feel of how faithful a WARC file really is.

Finally, this article was originally written as a set of notes and documentation in the archive page which may also be of interest to my readers.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Christian Schaller: Getting the team together to revolutionize Linux audio

Planet GNOME - Mon, 24/09/2018 - 9:35pm

So anyone reading my blog posts would probably have picked up on my excitement for the PipeWire project, the effort to unify the world of Linux audio, add an equivalent video bit and provide multimedia handling capabilities to containerized applications. The video part as I have mentioned before was the critical first step and that is starting to look really good with the screen sharing functionality in GNOME shell already using PipeWire and equivalent PipeWire support being added to KDE by Jan Grulich. We have internal patches for both Firefox and Chrome(ium) which we are polishing up to propose them upstream, but we will in the meantime offer them as downstream patches in Fedora as soon as they are ready for primetime. Once those patches are deployed you should have any browser based desktop sharing software, like Google Hangouts, working fully under Wayland (and X).

With the video part of PipeWire already in production we decided the time has come to try to accelerate the development of the audio bits. So PipeWire creator Wim Taymans, PulseAudio developer Arun Raghavan and myself decided to try to host a PipeWire hackfest this fall to bring together many of the core Linux audio developers to try to hash out a plan and a roadmap. So I am very happy to say that at the end of October we will have a gathering in Edinburgh to work on this, and the critical people we were hoping to have there are coming. Filipe Coelho who is the current lead developer on Jack will be there alongside Arun Raghavan, Colin Guthrie and Tanu Kaskinen from PulseAudio, Bastien Nocera from the GNOME project and Jan Grulich from KDE will be there representing desktop integration and finally Nirbheek Chauhan, Nicolas Dufresne and George Kiagiadakis from the GStreamer project. I think we have about the right number of people for this to be productive and at the same time have representation from everyone who needs to be there, so I am feeling very optimistic that we can come out of this event with both a plan for what we want to do and the right people involved to make it happen. The idea that we can have a shared infrastructure for consumer level audio and pro-audio under Linux really excites me and I do believe that if we do this right Linux will take a huge step forward as a natural home for pro-audio desktop users.

A big thank you to the GNOME Foundation for sponsoring this event and allowing us to bring all these people together!

VLC in Debian now can do bittorrent streaming

Planet Debian - Mon, 24/09/2018 - 9:20pm

Back in February, I got curious to see if VLC now supported Bittorrent streaming. It did not, despite the fact that the idea and code to handle such streaming had been floating around for years. I did however find a standalone plugin for VLC to do it, and half a year later I decided to wrap up the plugin and get it into Debian. I uploaded it to NEW a few days ago, and am very happy to report that it entered Debian a few hours ago, and should be available in Debian/Unstable tomorrow, and Debian/Testing in a few days.

With the vlc-plugin-bittorrent package installed you should be able to stream videos using a simple call to

vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent

It can handle magnet links too. Now if only native VLC had bittorrent support. Then a lot more people would be helping each other to share public domain and creative commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin does not remove downloaded files when VLC is stopped, so it can fill up your disk if you are not careful. Have fun. :)
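
For magnet links the invocation is the same; with a placeholder info-hash it would look like this (INFOHASH stands in for a real hash):

vlc 'magnet:?xt=urn:btih:INFOHASH'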

I would love to get help maintaining this package. Get in touch if you are interested.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Ismael Olea: Wacom's graphic tablet sizes

Planet GNOME - Mon, 24/09/2018 - 8:28pm

For some reason I’ve been looking for second hand Wacom graphic tablets. It has been annoying to find out which size corresponds to each model, so I’m writing here the list of the models I gathered.

The reason for looking only for Wacoms is that these days they seem to be very well supported in Linux, at least the old models you can get second hand.

model                active area size
CTL460               147,2 x 92,0 mm
CTL 420              127.6 x 92.8 mm
CTE-430 Graphire 3   127 x 101 mm
CTF-430              127.6 x 92.8 mm
CTL 460              147,2 x 92,0 mm
CTH-460              147,2 x 92,0 mm
CTH-461              147,2 x 92,0 mm
CTH-470              147,2 x 92,0 mm
CTL-470              147,2 x 92,0 mm
CTL-480 Intuos       152 x 95 mm
CTE-640              208.8 x 150.8 mm
CTE-650              216.5 x 135.3 mm
CTH-661              215.9 x 137.16 mm
CTH-670              217 x 137 mm
ET-0405A-U           127 x 106 mm
Graphire 2           127.6 x 92.8 mm
Intuos 2             127.6 x 92.8 mm (probably)
Volito 2             127.6 x 92.8 mm


As a reference, these are the standard DIN sizes comparable with those models:

DIN type   size
A4         210 x 297 mm
A5         148 x 210 mm
A6         105 x 148 mm

If you find any typo or want to add other models feel free to comment.

Zeeshan Ali: Recently in Geoclue

Planet GNOME - Mon, 24/09/2018 - 8:54am
After I started working for Collabora in April, I've finally been able to put some time into maintenance and development of Geoclue again. While I've fixed quite a few issues on the backlog, there have been some significant changes of late that I felt deserve some highlighting. Hence this blog post.

Leaving security to where it belongs 
Since people's location is a very sensitive piece of information, security of this information has been a core part of the Geoclue2 design. The idea was (and still is) to only allow apps access to the user's location with their explicit permission (which they can easily revoke later). When Geoclue2 was designed and then developed, we didn't have Flatpak. Surely, people were talking about the need for something like Flatpak but even with those ideas, it wasn't clear how location access would be handled.

Hence we decided that geoclue would handle this itself, through an external app authorizing agent, and implemented such an agent in GNOME Shell. Since there is no reliable way to identify an app on Linux, there were mixed reactions to this approach. While some thought it's good to have something rather than nothing, others thought it better to wait for the time when we have the infrastructure that allows us to reliably identify apps.

Fast forward to a year or so ago, when Flatpak portals became a thing: I had a long discussion with Matthias Clasen and Bastien Nocera about how geolocation should work in Flatpak. We disagreed on our approach and we forgot about the whole thing then.

Some months ago, we had to make the app authorizing agent compulsory to plug some security holes, and that made a lot of people who don't use GNOME unhappy. We had to start installing the demo agent for non-GNOME setups as a workaround. This forced me to rethink the whole approach and after some more long discussions with Matthias and a lot of thinking, the plan is to:
  • Create a Flatpak geolocation portal. Matthias already has a work-in-progress implementation. I really wanted the portal API to be as identical to the Geoclue API as possible, but I failed to convince Matthias of that. This is not that big an issue though, as at least the apps using the GeoclueSimple API will not need to change anything for accessing location from inside the Flatpak sandbox.
  • Drop all authorization from Geoclue and leave that to the geolocation portal. I've already dropped authorization for non-Flatpak (i.e. system) apps in git master. Once the portal is in place and GNOME Shell and control-center have been modified to talk to it, we can drop all app authorizing code from Geoclue.

    Note that we have been able to reliably identify Flatpak apps; it's only the system apps that can lie about their identity.
A modern build system 
Like many Free Software projects, Geoclue is also now using Meson for its builds. After it started to work reliably, I also dropped the autotools-based build completely. The faster build makes development a much more pleasant experience.

And a modern issue tracker to go with it
Bugzilla served us well, but patches in Bugzilla are no fun, even though git-bz makes it much, much better. So when Daniel Stone set up GitLab on freedesktop.org, Geoclue was one of the first few projects to move to GitLab. Now it's much easier and simpler to contribute to Geoclue.

Minimize GeoIP use
While GeoIP is a nice backup if you have neither WiFi hardware nor a cellular modem, Geoclue would also use (only) that if an app asked only for city-level accuracy. Apps like GNOME Weather and GNOME Clocks ask for only that, since that's the info they need; they don't need to know which street you're currently on. This would be perfect if the GeoIP database in use were correct or accurate for at least 90% of IP addresses, but unfortunately the reality is far from that. This meant a significant number of people got annoyed with these apps showing them the time and weather of a town other than their current one.

On the other hand, we couldn't just use a more accurate geolocation source (WiFi), since an app should not get a more accurate location than it asked for and was authorized for by the user. While we currently don't have the UI in GNOME (or any other platform) that allows users to control location accuracy, the infrastructure has always been in place to do that.

Recently someone not only reported this but also made a good suggestion, which I have now implemented: use WiFi geolocation for city-level accuracy as well, but randomize the location enough to mitigate the privacy concerns. It should be noted that while this solution ensures that apps don't get a more accurate location than they should, it still means sending out the current WiFi data to the Mozilla Location Service (MLS) and Geoclue getting a very accurate (street-level) location in response. It's all over HTTPS so it's not as bad as it sounds.
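
As a rough sketch of the randomization idea (illustrative only, not Geoclue's actual code), one can offset an accurate fix by a random bearing and distance of up to a few kilometres:

// Fuzz an accurate (lat, lon) fix for city-level consumers.
function fuzzLocation(lat, lon, radiusKm) {
    const bearing = Math.random() * 2 * Math.PI;
    const distance = Math.random() * radiusKm;
    // One degree of latitude is ~111 km; a degree of longitude shrinks with cos(lat).
    const dLat = (distance / 111) * Math.cos(bearing);
    const dLon = (distance / (111 * Math.cos(lat * Math.PI / 180))) * Math.sin(bearing);
    return [lat + dLat, lon + dLon];
}

print(fuzzLocation(48.8566, 2.3522, 3)); // somewhere within ~3 km of central Paris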

The future of Mozilla Location Service
When Mozilla announced their location service in late 2013, Geoclue became one of its first users, as it was our only hope for a reliable WiFi-geolocation source. We couldn't use Google's service, as their terms of service don't allow it to be used in an open source project (I recall some clause that it can only be used with Google Maps and not any other map software). MLS was a huge success in terms of people contributing WiFi data to it. I've been to quite a few places around Europe and North America in the last few years and I haven't been to any location that is not already covered by MLS.
Mozilla's own interest in this service was tied to their Firefox OS project. Unfortunately the Firefox OS project was abandoned two years ago, and Mozilla lost its interest in MLS as a result. Mozilla folks are the good guys so they have kept the service running and users can still contribute data, but it's no longer developed or maintained.
Since this is a very important service for all users of geoclue, I feel very uneasy about this uncertain future of MLS. So consider this a call for help. If your company relies on MLS (directly or through Geoclue) and you want to secure the future of Open Source geolocation, please do get in touch and we can discuss how we could possibly achieve that.

Philip Chimento: JavaScript news from GNOME 3.30

Planet GNOME - Hën, 24/09/2018 - 7:53pd

Welcome back to the latest news on GJS, the JavaScript engine that powers GNOME Shell, Endless OS, and many GNOME apps.

I haven’t done one of these posts for several versions now, but I think it’s a good tradition to continue. GNOME 3.30 has been released for several weeks now, and while writing this post I just released the first bugfix update, GJS 1.54.1. Here’s what’s new!

If you prefer to watch videos rather than read, see my GUADEC talk on the subject.

JavaScript upgrade!

GJS is based on SpiderMonkey, which is the name of the JavaScript engine from Mozilla Firefox. We now use the version of SpiderMonkey from Firefox 60. (The way it goes is that we upgrade whenever Firefox makes an extended support release (ESR), which happens about once a year.)

This brings a few language improvements: not as many as in 2017 when we zipped through a backlog of four ESRs in one year, but here’s a short list:

  • Asynchronous iterators (for await (... in ...))
  • Rest operator in object destructuring (var {a, b, ...cd} = ...)
  • Spread operator in object literals (obj3 = {...obj1, ...obj2})
  • Anonymous catch (catch {...} instead of catch (e) {...})
  • Promise.prototype.finally()
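
A quick sketch of a few of these in action (runnable with gjs):

const obj1 = {a: 1};
const obj2 = {b: 2};
const obj3 = {...obj1, ...obj2};  // spread in object literals
const {a, ...rest} = obj3;        // rest in object destructuring
try {
    JSON.parse('not json');
} catch {                         // anonymous catch: no binding needed
    print('could not parse');
}
Promise.resolve(42).finally(() => print('done'));  // Promise.prototype.finally()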

There are also some removals from the language, of Mozilla-specific extensions that never made it into the web standards.

  • Conditional catch (catch (e if ...))
  • For-each-in loops (for each (... in ...))
  • Legacy lambda syntax (function (x) x * x)
  • Legacy iterator protocol
  • Array and generator comprehensions ([for (x of iterable) expr(x)])

Hopefully you weren’t using any of these, because they will not even parse anymore! I wrote a tool called moz60tool that will scan your source files and hopefully flag any uses of the removed syntax. It’s also available as a shell extension by Andy Holmes.

Time for your code to get a checkup… Photo by rawpixel.com on Pexels.com

ByteArray

A special note about ByteArray: the SpiderMonkey upgrade made it necessary to rewrite the ByteArray class, since support for intercepting property accesses in C++-native JS objects was removed, and that was what ByteArray used internally to implement expressions like bytearray[5].

I think the replacement API would have made performance worse, and ByteArray is pretty performance-critical; so I took the opportunity to replace ByteArray with JavaScript’s built-in Uint8Array. (Uint8Array didn’t exist when GJS was invented.) For this, I implemented a feature in SpiderMonkey that allows you to store a GBytes inside a JavaScript ArrayBuffer object.

The result is not 100% backwards compatible. Some functions now return a Uint8Array object instead of a ByteArray, and there’s not really a way around that. The two are not really unifiable; Uint8Array’s length is immutable, for one thing. If you want the old behaviour back, you can call new ByteArray.ByteArray() on the returned Uint8Array and all the rest of your code should work as before. However, the legacy ByteArray will have worse performance than the Uint8Array, so instead you should port your code.
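
Here is a sketch of that escape hatch; the byte values below just stand in for whatever some API now returns as a Uint8Array:

const ByteArray = imports.byteArray;
const contents = new Uint8Array([104, 105, 10]);   // pretend this came from an API
print(ByteArray.toString(contents));               // preferred: use the Uint8Array directly
const legacy = new ByteArray.ByteArray(contents);  // legacy wrapper, at a performance cost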

Technical Preview: Async Operations

The subject of Avi Zajac’s summer internship was integrating Promises and async functions with GIO’s asynchronous operations. That is, instead of this,

file.load_contents_async(null, (obj, res) => {
    const [, contentsBytes, etag] = obj.load_contents_finish(res);
    print(ByteArray.toString(contentsBytes));
});

you should be able to do this:

file.load_contents_async(null)
    .then((contentsBytes, etag) => print(ByteArray.toString(contentsBytes)));

or this:

const [contentsBytes, etag] = await file.load_contents_async(null);
print(ByteArray.toString(contentsBytes));

If you don’t pass in a callback to the operation, it assumes you want a Promise instead of a callback, and will return one so that you can call .then() on it, or use it in an await expression.

This feature is a technology preview in GNOME 3.30, meaning you must opt in for each method that you want to use it with. Opt in by executing this code at the startup of your program:

Gio._promisify(classPrototype, asyncMethodName, finishMethodName);

This is made a bit extra complicated for file operations, because Gio.File is actually an interface, not a class, and because of a bug where JS methods on interface prototypes are ignored. We also provide a workaround API for this, which unfortunately only works on local (disk) files. So the call to enable the above load_contents_async() code would look like this:

Gio._promisify(Gio._LocalFilePrototype, 'load_contents_async', 'load_contents_finish');

And, of course, if you are using an older GNOME version than 3.30 but you still want to use this feature, you can just copy the Promisify code into your own program, if the license is suitable. I’ve already been writing some code for Endless Hack in this way and it is so convenient that I never want to go back.

Debugger

At long last, there is a debugger. Run it with gjs -d yourscript.js!

The debugger commands should be familiar if you’ve ever used GDB. It is a bit bare-bones right now; if you want to help improve it, I’ve opened issues #207 and #208 for some improvements that shouldn’t be too hard to do.

The debugger is based on Jorendb, a toy debugger by Jason Orendorff which is included in the SpiderMonkey source repository as an example of the Debugger API.

Performance improvements

We’ve made some good improvements in performance, which should be especially apparent in GNOME Shell. The biggest improvement is the Big Hammer patch by Georges Stavracas, which should stop your GNOME Shell session from holding on to hundreds of megabytes at a time. It’s a mitigation of the Tardy Sweep problem which is explained in detail by Georges here. Unfortunately, it makes a tradeoff of worse CPU usage in exchange for better memory usage. We are still trying to find a more permanent solution. Carlos Garnacho also made some further improvements to this patch during the 3.30 cycle.

The other prominent improvement is better memory usage for GObjects in general. A typical GNOME Shell run contains thousands or maybe tens of thousands of GObjects, so shaving even a few bytes off per object has a noticeable effect. Carlos Garnacho started some work in this direction and I continued it. In the end we went from 128 bytes per GObject to 88 bytes. In both cases there is an extra 32 byte penalty if the object has any state set on it from JavaScript code. With these changes, GNOME Shell uses several tens of megabytes less memory before you even do anything.

I have opened two issues for further investigation, #175 and #176. These are two likely avenues to reduce the memory usage even more, and it would be great if someone were interested to work on them. If they are successful, it’s likely we could get the memory usage down to 56 bytes per GObject, and eliminate the extra 32 byte penalty.

Eventually we will get to that “well-oiled machine” state… Photo by Celine Nadon on Unsplash

Developer Experience

I keep insisting it’s no coincidence that, as soon as we switched to GitLab, we started seeing an uptick in contributors whom we hadn’t seen before. This trend has continued: we merged patches from 22 active contributors to GJS in this cycle, up from 13 last time.

Claudio André landed many improvements to the GitLab CI. For one thing, the program is now built and tested on more platforms and using more compile options. He also spent a lot of effort ensuring that the most common failures will fail quickly, so that developers get feedback quickly.

From my side, the maintainer tasks have gotten a lot simpler with GitLab. When I review a merge request, I can leave the questions of “does it build?” and “are all the semicolons there?” to the CI, and concentrate on the more important questions of “is this a feature we want?” and “is it implemented in the best way?” The thumbs-up votey things on issues and merge requests also provide a bit of an indication of what people would most like to see worked on, although I am not really using these systematically yet.

We have some improvements soon to be deployed to DevDocs, and GJS Guide, a site explaining some of the more basic GJS concepts. Both of these were the subject of Evan Welsh’s summer internship. Evan did a lot of work in upstream DevDocs, porting it from the current unsupported CoffeeScript version to a more modern web development stack, which will hopefully be merged upstream eventually.

It’s about time we had a signpost to point the way in GJS. Photo by Jens Johnsson on Pexels.com

We also have an auto formatter for C++ code, so if you contribute code, it’s easier to avoid your branches failing CI due to linter errors. You can set it up so that it will correct your code style every time you commit; there are instructions in the Hacking file. It uses Marco Barisione’s clang-format-hooks. The process isn’t infallible, though: the CI job uses cpplint and the auto formatter uses clang-format, and the two are not 100% compatible.

There are a few miscellaneous nice things that Claudio made. The test coverage report for the master branch is automatically published on every push. And if you want to try out the latest GJS interpreter in a Flatpak, you can manually trigger the “flatpak” CI job and download one.

What’s coming in 3.32

There are a number of efforts already underway in the 3.32 cycle.

ES6 modules should be able to land! This is an often requested feature and John Renner has a mostly-working implementation already. You can follow along on the merge request.

Avi Zajac is working on the full version of the async Promises feature, both the gobject-introspection and GJS parts, which will make it no longer opt-in; Promises will “just work” with all GIO-based async operations.

Also related to async and promises, Florian Müllner is working on a new API that will simplify calling DBus interfaces using some of the new ES6 features we have gained in recent releases.

I hope to land Giovanni Campagna’s old “argument cache” patch set, which looks like it will speed up calls from JS into C by quite a lot. Apparently there is a similar argument cache in PyGObject.

Finally, and this will be the subject of a separate blog post coming soon, I think we have a plausible solution to the Tardy Sweep problem! I’m really excited to write about this, as the solution is really ingenious (I can say that, because I did not think of it myself…)

Contributors

Thanks to everyone who participated to bring GJS to GNOME 3.30: Andy Holmes, Avi Zajac, Carlos Garnacho, Christopher Wheeldon, Claudio André, Cosimo Cecchi, Emmanuele Bassi, Evan Welsh, Florian Müllner, Georges Basile Stavracas Neto, James Cowgill, Jason Hicks, Karen Medina, Ole Jørgen Brønner, pixunil, Seth Woodworth, Simon McVittie, Tomasz Miąsko, and William Barath.

As well, this release incorporated some old patches, contributed up to 10 years ago in some cases, that were never merged because they needed a few tweaks here or there. Thanks to those who contributed them; I’m glad we were finally able to land your work: Giovanni Campagna, Jesus Bermudez Velazquez, Sam Spilsbury, and Tommi Komulainen.

Meg Ford: LAS GNOME

Planet GNOME - Hën, 24/09/2018 - 2:59pd
The 2018 edition of the LAS GNOME conference happened two weeks ago. I arrived in time for the second day of talks, and left early Sunday.

The conference was small, but the group was energized and the talks were engaging. Attendees included local GNOMErs, developers and designers from the US free software community, developers from KDE, and local students, among others. I was very impressed by the volunteers’ hard work. The weather in Denver was very nice, and the venue was a beautiful old mansion close to downtown.

A few of my favorite talks:

It was interesting to hear Aleix Pol's presentation on KDE's approach to integrating Flatpak, Snap, and PackageKit backends into their software center.

Britt Yazel's talk on Research Science and Libre Computing was very thought-provoking. He talked about the enormous cost of using proprietary software and the lack of reproducibility of research outcomes due to software bugs and unknown testing environments. It was fascinating to see the parallels between the challenges software engineers face in setting up production and test environments and those faced by research scientists.

Heidi Ellis and Gregory Hislop's talk, "How Can You Make Your Open Source Project Attractive to Students?", outlined the challenges university professors face in trying to teach open source in the classroom, and how projects can make it easier. It was nice to see that GNOME's newcomer initiatives already provide many of the necessary things: contact information for mentors, places for newcomers to ask questions, documentation on how to get started, etc.

Amisha Singla's talk, "Guarding the Maps from Vandals", explored the evolution of Mapbox's approaches to detecting vandalism. They started with a rules-based approach and human review, and eventually re-wrote their system to use natural language processing and machine learning.

Thanks to all the volunteers whose hard work made the event possible! Hope to see you all again next year.


Bastian Ilsø Hougaard: Developer Center Initiative – Meeting Summary 21st September

Planet GNOME - Dje, 23/09/2018 - 10:17md

Since the last blog post there have been two Developer Center meetings: one held in coordination with LAS GNOME on Sunday the 9th of September, and another on Friday the 21st of September. Unfortunately I couldn’t attend the LAS GNOME meeting, but I’ll cover the general progress made here.

Test Instances Status

In the previous meetings we evaluated four possible technologies: Sphinx, Django, Vuepress, and HotDoc. Since then, progress on these proposals has varied considerably. We got feedback from Christoph Reiter on the feasibility of using Sphinx, and currently no effort is going towards a Sphinx test instance. Michael was unsure he could commit the time to the Django proposal and suggested either Vuepress or HotDoc instead. For these reasons, the Sphinx and Django proposals have been closed for now.

HotDoc has lately seen a lot of development by Matthieu and Thiblahute. A rough port of the Mallard-based gnome-devel-docs was demonstrated at the LAS GNOME call, so you can now, for example, find the Human Interface Guidelines in Markdown. Of course, there is still a long way to go, but this is a good first milestone, and HotDoc is the first of the test instances to reach it. Matthieu also gave answers to the criteria formulated in my previous blog post.

The main concerns about HotDoc have been maintainability and the generally small scale of the community surrounding it. On the other hand, Evan appears to be busy, and Vuepress hasn't received attention since its initial proposal. As the choice narrows, we intend to give the test instances one last small window of time to gain activity. Simultaneously, we have started to focus short-term efforts on improving the HotDoc test instance together with Matthieu and Thiblahute.

Initial Content-plan

The second item discussed at the meeting was an initial content plan. Prior to the meeting I worked out what this content plan could look like, based on Allan’s initial design. This is a summary of the proposed short-term plan:

  • The API Reference will be explorable through the current gtk-doc static HTML, and external API references will be linked where relevant.
  • The HIG will be ported to Markdown, and maintenance will continue in Markdown from there (see the next bullet).
  • The Tutorials section would consist of hand-ported GNOME Wiki HowDoIs and auto-ported GNOME Devel Docs. The GNOME Devel Docs repository would be ported to Markdown all at once and reviewed, and an announcement will be sent to the GNOME Docs mailing list when this happens. From that point on, documentation writers would be encouraged to continue edits directly through the new test instance.
  • The Distribute section will initially link to Flatpak’s Developer Documentation.
  • The Technologies overview will link to the corresponding GNOME.org page.
  • The Get Involved page will link to the GNOME Newcomer Guide on the GNOME Wiki.
  • Finally there is the GNOME Development Guide section, but I would personally rather propose merging this with Tutorials.

There are a lot more question marks and wishful thinking concerning the long-term plan, but you can read and comment on both the short-term and long-term content plans in the GitLab issue.

Next Steps

I will soon open a new Framadate poll for scheduling the next Developer Center meeting. For those interested in helping with the HotDoc test instance, feel free to file issues against it or join the discussion in the HotDoc Instance proposal. Personally, I will try to get HotDoc running locally on my machine and review the current site structure so it matches Allan’s proposal more closely. I will also try to help Thiblahute with writing a migration guide from GtkDoc to HotDoc.

It is still too early to review the ported GNOME Devel Docs material itself, but if you would like to contribute in other ways, let us know!

Patrick Griffis: LAS 2018

Planet GNOME - Dje, 23/09/2018 - 6:00pd

This month I was at my second Libre Application Summit in Denver. It was a smaller event than GUADEC, but personally it was my favorite conference so far.

One of the main goals of LAS has been to be a place for multiple platforms to discuss the desktop space, not just a GNOME event. This year two KDE members attended, @aleixpol and Albert Astals Cid, who spoke about the release cycles of KDE Applications and Plasma and the history of Qt. It is always interesting to see how another project solves the same problems and where there is overlap.

The elementary folks were there, since this is @cassidyjames’s home turf. He gave a great “It’s Not Always Technical” talk, as well as a talk with @danrabbit about AppCenter; both covered areas where the GNOME Project needs to improve. I also enjoyed meeting a few other community members such as @Philip-Scott and talking about their use of elementary’s platform.

Heather from Purism spoke about the status of the Librem 5, which I’m excited about, though it still has a way to go. It was great to get an opportunity to meet her, since we’ve spoken online about Purism’s interest in Flatpak and GNOME Builder.

There were some fantastic talks discussing FOSS usage at a broader level.

As always there was a big Flatpak presence, and throughout the event we had the opportunity to discuss things like adding Qt to fdo, tracking runtime CVEs, and sandboxing WebKitGTK. We also had a Flatpak BoF on the last day, discussing things like the possibility of selling apps and infrastructure improvements.

I really enjoyed the event overall and look forward to future LASes. Next week I will be in A Coruña, Spain, for the Web Engines Hackfest.
