Planet Debian

Lookalikes

Sun, 16/09/2018 - 8:18 PM

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

GIMP 2.10

Sun, 16/09/2018 - 6:00 AM

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:

Theme

The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows menu.

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Two days afterward

Sun, 16/09/2018 - 2:03 AM

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16 Tags: mintings C https://xana.scru.org Yammering

Backing the wrong horse?

Sat, 15/09/2018 - 2:13 PM

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took for managing packages of Ruby software: Ruby Gems. They interact really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.

Recently I dusted off some 12-year old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder, had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or the really awkward support for Unicode in Python 2 (this reminds me of Perl for all the wrong reasons).
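For illustration, extracting a version number with Python's re module looks roughly like this (a made-up example, with the approximate Ruby equivalent sketched in the comments):

import re

line = "package foo version 1.2.3 released"
match = re.search(r"version (\d+)\.(\d+)\.(\d+)", line)
if match:
    major, minor, patch = match.groups()
    print(major, minor, patch)

# The rough Ruby equivalent uses a language-level operator and Perl-style
# implicit match globals, with no import and no explicit match object:
#   if line =~ /version (\d+)\.(\d+)\.(\d+)/
#     puts "#{$1}.#{$2}.#{$3}"
#   end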

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Recommendations for software?

Sat, 15/09/2018 - 11:01 AM

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwnd?
    • If not would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly, the excellent Have I Been Pwned site provides an API which allows you to test whether a password has previously been included in a leak. This is great, and I've integrated their API into a couple of my own applications, but I was thinking on the bus home tonight that it might be worth tying it into PAM.

Sure, in the interests of security people should use key-based authentication for SSH, but .. most people don't. Even if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.
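For illustration, the lookup such a module would perform looks roughly like this in Python (the real module would do the same in C via libcurl), using the Have I Been Pwned range API and its k-anonymity scheme, where only the first five hex characters of the SHA-1 hash ever leave the machine:

import hashlib
import urllib.request

def pwned_count(password):
    # SHA-1 the password, send only the first 5 hex chars to the API,
    # then look for our own suffix in the returned candidate list.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))  # a known-leaked password returns a large count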

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Autobuilding Debian packages on salsa with Gitlab CI

Fri, 14/09/2018 - 4:45 PM

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to autobuild and -test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is because it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts adduser fakeroot sudo
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage unless and until all the jobs in the last stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
    - autoreconf -f -i
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or whatever the salsa maintainers have configured (I'm not sure which it is).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola repository on salsa.

UPDATE (2018-09-16): dropped the autoreconf call, isn't needed (it was there because it didn't work from the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

Wouter Verhelst https://grep.be/blog//pd/ pd

New website for vmdb2

Fri, 14/09/2018 - 3:00 PM

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

Lars Wirzenius' blog http://blog.liw.fi/englishfeed/ englishfeed

Gaming: Rise of the Tomb Raider

Fri, 14/09/2018 - 4:07 AM

Over the last weekend I finally finished The Rise of the Tomb Raider. As I wrote exactly 4 months ago when I started the game, I am a complete newbie to this kind of game, and was blown away by the visual quality and great gameplay.

I was really surprised how huge an area I had to explore over the four months. Many of the areas featured really excellent nature scenery, some with a depressingly dark and solemn atmosphere.

Another thing I learned is that the Challenge Tombs – some kind of puzzle challenges – weren't that important in previous games. I enjoyed these puzzles much more than the fighting sequences (also because I am really bad at combat and had to die so many times before I succeeded!).

Lots of sliding down on ropes, jumping, running, diving, often into the unknown.

In the last part of the game when Lara enters into the city underneath the glacier one is reminded of the scene when Frodo tries to enter into Mordor, seeing all the dark riders and troops.

The final approach starts, there is still a long way (and many strange creatures to fight), but at least the final destination is in sight!

I finished the game with 100%, because I went back to all the areas and used Stella’s Tomb Raider Site walkthrough to help me find all the items. I think it is practically impossible within a lifetime to find all the items alone without help. This is especially funny because in one of the trailers one of the developers mentions that they reckon on 15h of gameplay, and 30h if you want to finish at 100%. It took me 58h (!) to finish with 100% … and that with a walkthrough!!!

Anyway, tomorrow the Shadow of the Tomb Raider will be released, and I could also start the first game of the series, Tomb Raider, but I got a bit worn out by all the combat activity and decided to concentrate on a hard-core puzzle game, The Witness, which features loads and loads of puzzles, taught from simple to complex, and combined, to create a very interesting game. Now I only need the time …

Norbert Preining https://www.preining.info/blog There and back again

20180913-reproducible-builds-paris-meeting

Thu, 13/09/2018 - 11:54 PM
Reproducible Builds 2018 Paris meeting

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris, where we will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join! And this is the space we'll bring to life:

And whilst the exact content of the meeting will be shaped by the participants when we do it, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

Holger Levsen http://layer-acht.org/thinking/ Any sufficiently advanced thinking is indistinguishable from madness

What is the difference between moderation and censorship?

Thu, 13/09/2018 - 11:09 PM

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Daniel.Pocock https://danielpocock.com/tags/debian DanielPocock.com - debian

Debian LTS work, August 2018

Thu, 13/09/2018 - 1:54 PM

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

TeX Live contrib updates

Thu, 13/09/2018 - 4:56 AM

It is now more than a year since I took over tlcontrib from Taco and started providing it at the TeX Live contrib repository. It now serves the old TeX Live 2017 as well as the current TeX Live 2018, and since last year the number of packages has increased from 52 to 70.

Recent changes include pTeX support packages for non-free fonts and more packages from the AcroTeX bundle. In particular since the last post the following packages have been added: aeb-mobile, aeb-tilebg, aebenvelope, cjk-gs-integrate-macos, comicsans, datepicker-pro, digicap-pro, dps, eq-save, fetchbibpes, japanese-otf-nonfree, japanese-otf-uptex-nonfree, mkstmpdad, opacity-pro, ptex-fontmaps-macos, qrcstamps.

Here I want to thank Jürgen Gilg for reminding me consistently of updates I have missed, big thanks!

To recall what TLContrib is for: it collects packages that are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. The full list of packages can be seen here.

Please see the main page for Quickstart, History, and details about how to contribute packages.

Last but not least, while this is a service to make access to non-free packages easier for users of the TeX Live Manager, our aim is to have as many packages as possible made completely free and included in TeX Live proper!

Enjoy.

Norbert Preining https://www.preining.info/blog There and back again

digest 0.6.17

Thu, 13/09/2018 - 2:06 AM

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.

This release brings another robustification thanks to Radford Neal, who noticed a segfault in 32 bit mode on Sparc running Solaris. Yay for esoteric setups. But thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. But two files remain with UBSAN issues, help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Disappointment on the new commute

Wed, 12/09/2018 - 6:53 PM

Imagine my disappointment when I discovered that signs on Stanford’s campus pointing to their “Enchanted Broccoli Forest” and “Narnia”—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

Benjamin Mako Hill https://mako.cc/copyrighteous copyrighteous

Distributing static routes with DHCP

Wed, 12/09/2018 - 10:00 AM

This week I had to deal with a setup in which I needed to distribute additional static network routes using DHCP.

The setup is easy but there are some caveats to take into account. Also, DHCP clients might not behave as one would expect.

The starting situation was a working DHCP client/server deployment. Some standard virtual machines would request their network setup over the network. Nothing new. The DHCP server is dnsmasq, and the daemon is running under Openstack control, but this has nothing to do with the DHCP problem itself.

By default, dnsmasq seems to send clients the Routers (code 3) option, which usually contains the gateway for clients in the subnet to use. My situation required distributing one additional static route for another subnet. My idea was for DHCP clients to end up with this simple routing table:

user@dhcpclient:~$ ip r
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.100
172.16.0.0/21 via 10.0.0.253 dev eth0    <--- extra static route

To distribute this extra static route, you only need to edit the dnsmasq config file and add a line like this:

dhcp-option=option:classless-static-route,172.16.0.0/21,10.0.0.253

For my initial tests of this config I was simply asking the DHCP client to refresh the lease. This got my new static route installed, but in the case of a reboot, the DHCP client would not get the default route. The different behaviour is documented in dhclient-script(8).

To try something similar to a reboot situation, I had to use this command:

user@dhcpclient:~$ sudo ifup --force eth0
Internet Systems Consortium DHCP Client 4.3.1
Copyright 2004-2014 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on   Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 10.0.0.1
RTNETLINK answers: File exists
bound to 10.0.0.100 -- renewal in 20284 seconds.

Anyway, this was really surprising at first, and led me to debug DHCP packets using dhcpdump:

  TIME: 2018-09-11 18:06:03.496
    IP: 10.0.0.1 (xx:xx:xx:xx:xx:xx) > 10.0.0.100 (xx:xx:xx:xx:xx:xx)
    OP: 2 (BOOTPREPLY)
 HTYPE: 1 (Ethernet)
  HLEN: 6
  HOPS: 0
   XID: xxxxxxxx
  SECS: 8
 FLAGS: 0
CIADDR: 0.0.0.0
YIADDR: 10.0.0.100
SIADDR: xx.xx.xx.x
GIADDR: 0.0.0.0
CHADDR: xx:xx:xx:xx:xx:xx:00:00:00:00:00:00:00:00:00:00
OPTION:  53 (  1) DHCP message type         2 (DHCPOFFER)
OPTION:  54 (  4) Server identifier         10.0.0.1
OPTION:  51 (  4) IP address leasetime      43200 (12h)
OPTION:  58 (  4) T1                        21600 (6h)
OPTION:  59 (  4) T2                        37800 (10h30m)
OPTION:   1 (  4) Subnet mask               255.255.255.0
OPTION:  28 (  4) Broadcast address         10.0.0.255
OPTION:  15 ( 13) Domainname                xxxxxxxx
OPTION:  12 ( 21) Host name                 xxxxxxxx
OPTION:   3 (  4) Routers                   10.0.0.1
OPTION: 121 (  8) Classless Static Route    xxxxxxxxxxxxxx .....D..
[...]
---------------------------------------------------------------------------

(you can use this handy command on both the server and the client side)

So, the DHCP server was sending both the Routers (code 3) and the Classless Static Route (code 121) options to the clients. Why, then, would the client fail to install both routes?

I obtained some help from folks on IRC and they pointed me towards RFC3442:

DHCP Client Behavior [...] If the DHCP server returns both a Classless Static Routes option and a Router option, the DHCP client MUST ignore the Router option.

So, clients are supposed to ignore the Routers (code 3) option if they get an additional static route. This is very counter-intuitive, but can easily be worked around by just distributing the default gateway route as another classless static route:

dhcp-option=option:classless-static-route,0.0.0.0/0,10.0.0.1,172.16.0.0/21,10.0.0.253
#                                         ^^ default route   ^^ extra static route
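As an aside, RFC 3442's compact encoding also explains the 8-byte length dhcpdump reported for option 121 above: one byte of prefix length, only the significant octets of the destination, then the four octets of the router. A rough sketch of that encoding for the extra route, for illustration only:

def encode_classless_route(destination, prefix_len, router):
    # The number of destination octets transmitted depends on the prefix length.
    significant = (prefix_len + 7) // 8
    dest_octets = [int(o) for o in destination.split(".")][:significant]
    router_octets = [int(o) for o in router.split(".")]
    return bytes([prefix_len] + dest_octets + router_octets)

route = encode_classless_route("172.16.0.0", 21, "10.0.0.253")
print(len(route), route.hex())  # 8 15ac10000a0000fd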

Obviously this was the first time in my career dealing with this setup and situation. My conclusion is that even old-enough protocols like DHCP can sometimes behave in a counter-intuitive way. Reading RFCs is not always fun, but it can help you understand what’s going on.

You can read the original issue in Wikimedia Foundation’s Phabricator ticket T202636, including all the back-and-forth work I did. Yes, it is open to the public ;-)

Arturo Borrero González http://ral-arturo.org/ ral-arturo.org

o-tour 2018 (Halbmarathon)

Wed, 12/09/2018 - 8:51 AM

My first race redo at the same distance/ascent meters. Let’s see how it went… 45.2km, 1’773m altitude gain (officially: 45km, 1’800m). This was the Halbmarathon distance, compared to the full Marathon one, which is 86km/3’000m.

Pre-race

I registered for this race right after my previous one, and despite it having much more altitude meters, I was looking forward to it.

That is, until the week of the race. The entire week was just off. Work life, personal life, everything seemed out of sync. Including a half-sleepless night on Wednesday, which ruined my sleep schedule for the rest of the week and also my plans for the light maintenance rides before the event. And which also made me feel half-sick due to lack of sleep.

I prepared for my ride on Saturday (bike check, tyre pressure check, load bike on car), and I went to bed—late, again difficult to fall asleep—not being sure I’d actually go to the race. I had a difficult night’s sleep, but I actually managed to wake up with the alarm. OK, chances looking somewhat better for getting to the race. Total sleep: 5 hours. Ouch!

So I get in the car—about 15 minutes later than planned—and start, only to find a road closure on the most direct route to the highway, and police people directing the traffic—at around 07:10—on the “new” route, resulting in yet another detour, and me getting stressed enough about which way to go, and not paying attention to my exact speed on a downhill, that I got flashed by a speed camera. Sigh…

The rest of the drive was uneventful: I reach Alpnach, I park, I get to the start/finish location, get my number, and finally get to the start line with two minutes (!!) to spare. The most “just-in-time” I have ever been at a race, as I’m usually way early. By this time I was even in a later starting block, since mine was already set up and would have been difficult to reach.

Oh, and because I was so late, and because this is a smaller race (number of participants, setup, etc.), I didn’t find a place to fill my water bottle. And this, for the one time I didn’t fill it in advance. Fun!

The race

So given all this, I set low expectations for the race, and decided to consider it a simple Sunday ride. I would take it easy on the initial 12.5km, 1’150m climb, and then see how it went. There was a food station two thirds of the way up the climb, so I told myself I’d hopefully not get too dehydrated before then.

The climb starts relaxed (I was among the last people starting) and 15 minutes in, my friend the lower back says "ah, you're climbing again, remember I'm here too", which was way too early. So, I said to myself, nothing to lose, let's just switch to standing every time my back gets tired, and stand until my legs get tired, then switch again.

The climb here was on pavement, so standing was pretty easy. And, to my surprise, this worked quite well: while standing I also went much faster (by much, I mean probably ~2-3km/h) than sitting so I was advancing in the long stretch of people going up the mountain, and my back was very relieved every time I switched.

So, up and down and up and down in the saddle, and up and up and up on the mountain, until I get to the food station. Water! I quickly drink an entire bottle (750ml!!), refill, and go on.

After the food station, the route changed to gravel, and this made pedalling while standing more difficult, due to less grip and slipping if you’re not careful. I tried the sit/stand/sit routine, but it was getting more difficult, so I went on, more slowly, until at one point I had to stop. I was by now in the sun, hot, and tired. And annoyed at the low volume out of the water bottle, so I opened it, drank from it as from a glass, and emptied it quickly - yet again! I felt much better, and restarted pedalling, eager to get to the top.

The last part of the climb is quite steep and more or less on a trail, so here I was pushing the bike, but since I didn’t have any goals I did not feel guilty about it. Up and up, and finally I reach the top (altitude: 1’633m, elevation gained: 1’148m out of ~1’800m), and I can breathe easier knowing that the most difficult part is over.

From here, it was finally a good race. The o-tour route is much more beautiful than I remembered, but also more technically difficult, to the point of being quite annoying: it runs for long stretches on very uneven artificial paths, as if someone built a paved road but with the goal of having the most uneven surface possible, all rocks set at an angle, instead of aiming for an even surface. For hikers this is excellent, especially in wet conditions, but for trying to move a bike forward, or even more so forward and uphill, it is annoying. There were stretches of ~5% grade where I was pushing the bike, due to how annoying biking on that surface was.

The route also has nice single track sections, some easily navigable, some not, at least for me, and some where I had to carry the bike. Or even carry the bike on my shoulder while I was climbing over roots. A very nice thing, and sadly uncommon in this series of races.

One other fun aspect of the race was the mud. Especially in the forests, there was enough water left on tracks that one got splashed quite often, and outside (where the soil doesn’t have the support of the roots), less water but quite deep mud. Deep enough that at one point, I misjudged how deep the roughly 3-metre-long muddy section was, and I had enough speed that my front wheel got stuck in mud, and slowly (and I imagine gracefully as well :P, of course) I went over the handlebars in the softest mud I ever landed in. Landed, as in halfway up my elbows (!), hands full of mud, gloves muddy as hell, legs down to the ankle in mud so shoes also muddy, and me finding the situation the funniest moment of the race. The guy behind me asked if everything was alright, and I almost couldn’t answer due to laughing out loud.

Back to serious stuff now. The rest of the “meters of climbing left”, about 600+ meters, were supposed to be distributed in about 4 sections, all about the same profile except the first one which was supposed to be a bit longer and flatter. At least, that’s what the official map was showing, in a somewhat stylised way. And that’s what I based my effort dosage on.

Of course, real life is not stylised, and there were 2 small climbs (as expected), and then a long and slow climb (definitely unexpected). I managed to stay on the bike, but the unexpected long climb—two kilometres—exhausted my reserves, despite being a relatively small grade (~5%, gained ~100m). I really was not planning for it, and I paid for that. Then a bit of downhill, then another short climb, another downhill—on road, 60km/h!—and then another medium-sized climb: 1km long, gaining 60m. Then a slow and long descent, a 700m/50m climb, a descent, and another climb, short but more difficult: 900m/80m (~9%). By this time, I was spent, and was really looking forward to the final descent, which I remembered was half pavement, half very nice single-track. And indeed it was superb, after all that climbing. Yay!

And then, reaching basically the end of the race (a few kilometres left), I remembered something else: this race has a climb at the end! This is where the missing climbing meters were hiding!

So, after eight kilometres of fun, 1.5km of easy climbing to gain 80m of ascent. Really trivial, a regular commute almost, but for me at this stage, it was painful and the most annoying thing ever…

And then, reaching the final two kilometres of light descent on paved road, and finishing the race. Yay!

Overall, given the way the week went, this race was much easier than I hoped, and quite enjoyable. Why? No idea. I’ll just take the bonus points and not complain ☺

Real champions

After about two minutes of me finishing, I hear the speaker saying that the second placed woman in the long distance was nearing, and that it was Esther Süss! I’ve never seen her in person as far as I know, nor any of the other leaders in these races, since the finishing times are usually far apart. In this case, I apparently finished between the first and second places in the women’s race (there was 3m05s difference between them). This also explained what all those photographers with telephotos at the finish line were waiting for, and why they didn’t take my picture :)))))) In any case, I was very happy to see her in person, since I’m very impressed that at 44 years old, she’s still competing and most of the time winning against other women, 10 or even 20 years younger than her. Gives a bit of hope for older people like me. Of course minus being on the thinner side (unlike me), and actually liking long climbs (unlike me), and winning (definitely unlike me). Not even bringing up the world championships gold medals, OK?

Race analysis Hydration, hydration…

As I mentioned above, I drank a lot at the beginning of the race. I continued to drink, and by 2 hours I was 3 full bottles in, at 2:40 I finished the fourth bottle.

Four bottles is 3 litres of liquid, which is way more than my usual consumption since I stopped carrying my hydration pack. In the Eiger bike challenge, done in much hotter temperatures and for longer, I think I drank about the same or only slightly more (not sure exactly). Then temperature: 19° average, 33° max, 6½ hours, this time: 16.2° average, 20° max, ~4 hours. And this time, with 4L in 4 hours, I didn’t need to run to the bathroom as I finished (at all).

The only conclusion I can make is that I sweat much more than I think, and that I must more actively drink water. I don’t want to go back to hydration pack in a race (definitely yes for more relaxed rides), so I need to use all the food stops to drink and refill.

General fitness vs. leg muscles

I know my core is weak, but it’s getting hilarious that 15 minutes into the climbing, I start getting signals. This does not happen on flat terrain or indoors for at least 2-2½ hours, so the conclusion is that I need to get fitter (core) and also do more real outdoor climbing training—just long (slower) climbs.

The sit-stand-sit routine was very useful, but it did result in even my hands getting tired from having to move and stabilise the bike. So again, need to get fitter overall and do more cross-training.

That is, as if I didn’t know it already ☹

Numbers

This is now beyond subjective; let’s see what the numbers look like:

  • 2016:
    • time: overall 3h49m34.4s, start-Langis 2h44m31s, Langis-finish: 1h05m02s.
    • age category: overall 70/77, start-Langis ranking: 70, Langis-finish: 72.
    • overall gender ranking: overall 251/282, start-Langis: 250, Langis-finish: 255.
  • 2018:
    • time: 3h53m43.4s, start-Langis: 2h50m11s, Langis-finish: 1h03m31s.
    • age category 70/84, start-Langis: 71, Langis-finish: 70.
    • overall gender ranking: overall 191/220, start-Langis: 191, Langis-finish: 189.

The first conclusion is that I’ve done infinitesimally better in the overall rankings: 252/282=0.893, 191/220=0.868, so better but only trivially so, especially given the large decline in participants on the short distance (the long one had the same). I cannot compare age category, because ☺

The second interesting titbit is that in 2016, I was relatively faster on the climb plus first part of the high-altitude route, and relatively slower on the second half plus descent, both in the age category and the overall category. In 2018, this reversed, and I gained places on the descent. Time comparison, ~6 minutes slower in the first half, 1m30s faster on the second one.

But I find my numbers so close that I’m surprised I neither significantly improved nor slowed down in two years. Yes, I’m not consistently training, but still… I kind of expect some larger difference, one way or another. Strava also says that I beat my 2016 numbers on 7 segments, but only got second place to that on 14 others, so again a wash.

So, weight gain aside, it seems nothing much has changed. I probably need to improve my training consistency 10× to see a real difference. On the other hand, maybe this result is quite good, given my much less consistent training than in 2016 — ¯\_(ツ)_/¯.

Equipment-wise, I had a different bike now (full suspension vs. hardtail), and—compared to the previous race two weeks ago, at least—I had the tyre pressure quite well dialled in for this event. So I was able to go fast, and indeed overtake a couple of people on the flat/light descents, and more importantly, was not overtaken by other people on the long descent. My brakes were much better as well, so I was a bit more confident, but the front brake started squeaking again when it got hot, so I need to improve this even more. But again, not even the changed equipment made much of a difference ☺

I’ll finish here with an image of my “heroic efforts”:

Not very proud of this…

I’m very surprised that they put a photographer at the top of a climb, except maybe to motivate people to pedal up the next year… I’ll try to remember this ☺

Iustin Pop https://k1024.org iustin - all posts

TensorFlow on Debian/sid (including Keras via R)

Wed, 12/09/2018 - 3:04 AM

I have been struggling with getting TensorFlow running on Debian/sid for quite some time. The main problem is that the CUDA libraries installed by Debian are CUDA 9.1 based, while the precompiled pip-installable TensorFlow packages require CUDA 9.0, which resulted in an unusable installation. But I finally got around it and found all the pieces.

Step 1: Install CUDA 9.0

The best way I found was going to the CUDA download page, selecting Linux, then x86_64, then Ubuntu, then 17.04, and finally deb (network). In the text that appears, click on the download button to obtain (currently) cuda-repo-ubuntu1704_9.0.176-1_amd64.deb.

After installing this package as root with

dpkg -i cuda-repo-ubuntu1704_9.0.176-1_amd64.deb

the nvidia repository signing key needs to be added

apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1704/x86_64/7fa2af80.pub

and finally install the CUDA 9.0 libraries (not all of cuda-9-0 because this would create problems with the normally installed nvidia libraries):

apt-get update
apt-get install cuda-libraries-9-0

This will install lots of libs into /usr/local/cuda-9.0 and add the respective directory to the ld.so path by creating a file /etc/ld.so.conf.d/cuda-9-0.conf.

Step 2: Install CUDA 9.0 CuDNN

One difficult-to-satisfy dependency is the CuDNN libraries. In our case we need the version 7 library for CUDA 9.0. To download these files one needs to have an NVIDIA developer account, which is quick and painless to create. After that, go to the CuDNN page, select Download for CUDA 9.0 and then cuDNN v7.2.1 Runtime Library for Ubuntu 16.04 (Deb).

This will download a file libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb which needs to be installed with dpkg -i libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb.

Step 3: Install Tensorflow for GPU

This is the easiest one and can be done as explained on the TensorFlow installation page using

pip3 install --upgrade tensorflow-gpu

This will install several other dependencies, too.

Step 4: Check that everything works

Last but not least, make sure that TensorFlow can be loaded and find your GPU. This can be done with the following one-liner, and in my case gives the following output:

$ python3 -c "import tensorflow as tf; sess = tf.Session() ; print(tf.__version__)" 2018-09-11 16:30:27.075339: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2018-09-11 16:30:27.143265: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2018-09-11 16:30:27.143671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.4175 pciBusID: 0000:01:00.0 totalMemory: 3.94GiB freeMemory: 3.85GiB 2018-09-11 16:30:27.143702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0 2018-09-11 16:30:27.316389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix: 2018-09-11 16:30:27.316432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0 2018-09-11 16:30:27.316439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N 2018-09-11 16:30:27.316595: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3578 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) 1.10.1 $ Addendum: Keras and R

With the above settled, the installation of Keras can be done via

apt-get install python3-keras

and this should pick up the TensorFlow backend automatically.
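A quick way to verify this (an illustrative check, not from the original instructions) is to ask Keras itself which backend it loaded:

import keras
print(keras.backend.backend())  # should print: tensorflow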

For R there is a Keras library that can be installed with

install.packages('keras')

on the R command line (as root).

After that running a simple MNIST code example should use your GPU from R (taken from Deep Learning with R from Manning Publications):

library(keras)

mnist <- dataset_mnist()
train_images <- mnist$train$x
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y

network <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")

network %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

train_labels <- to_categorical(train_labels)
test_labels <- to_categorical(test_labels)

network %>% fit(train_images, train_labels, epochs = 5, batch_size = 128)

metrics <- network %>% evaluate(test_images, test_labels)
metrics

Norbert Preining https://www.preining.info/blog There and back again

PSA: the.earth.li ceasing Debian mirror service

Tue, 11/09/2018 - 9:22 PM

This is a public service announcement that the.earth.li (the machine that hosts this blog) will cease service as a Debian mirror on 1st February 2019 at the latest.

It has already been removed from the official list of Debian mirrors. Please update your sources.list to point to an alternative sooner rather than later.

The removal has been driven by a number of factors:

  • This mirror was originally setup when I was running Black Cat Networks, and a local mirror was generally useful to us. It’s 11+ years since Black Cat was sold, and 7+ since it moved away from that network.
  • the.earth.li currently lives with Bytemark, who already have an official secondary mirror. It does not add any useful resilience to the mirror network.
  • For a long time I’ve been unable to mirror all release architectures due to disk space limitations; I think such mirrors are of limited usefulness unless located in locations with dubious connectivity to alternative full mirrors.
  • Bytemark have been acquired by IOMart and I’m uncertain as to whether my machine will remain there long term - the acquisition announcement focuses on their cloud service rather than mentioning physical server provision. Disk space requirements are one of my major costs and the Debian mirror makes up ⅔ of my current disk usage. Dropping it will make moving host easier for me, should it prove necessary.

I can’t find an exact record of when I started running a mirror, but it was certainly before April 2005. 13 years doesn’t seem like a bad length of time to have been providing the service. Personally I’ve moved to deb.debian.org, but if the network location of the mirror is the reason you chose it then mirror.bytemark.co.uk should be a good option.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Thinkpad X1 Carbon Gen 6

Tue, 11/09/2018 - 12:33 PM

In February I reviewed a Thinkpad X1 Carbon Gen 1 [1] that I bought on Ebay.

I have just been supplied the 6th Generation of the Thinkpad X1 Carbon for work, which would have cost about $1500 more than I want to pay for my own gear. ;)

The first thing to note is that it has USB-C for charging. The charger continues the trend towards smaller and lighter chargers and also allows me to charge my phone from the same charger so it’s one less charger to carry. The X1 Carbon comes with a 65W charger, but when I got a second charger it was only 45W but was also smaller and lighter.

The laptop itself is also slightly smaller in every dimension than my Gen 1 version as well as being noticeably lighter.

One thing I noticed is that the KDE power applet disappears when the battery is full – maybe due to my history of buying refurbished laptops, I haven’t had a battery report itself as full before.

Disabling the touch pad in the BIOS doesn’t work. This is annoying; there are 2 devices for mouse-type input, so I need to configure Xorg to only read from the Trackpoint.

The labels on the lid are upside down from the perspective of the person using it (but right way up for people sitting opposite them). This looks nice for observers, but means that you tend to put your laptop the wrong way around on your desk a lot before you get used to it. It is also fancier than the older model, the red LED on the cover for the dot in the I in Thinkpad is one of the minor fancy features.

As the new case is thinner than the old one (which was thin compared to most other laptops) it’s difficult to open. You can’t easily get your fingers under the lid to lift it up.

One really annoying design choice was to have a proprietary Ethernet socket with a special dongle. If the dongle is lost or damaged it will probably be expensive to replace. An extra USB socket and a USB Ethernet device would be much more useful.

The next deficiency is that it has one USB-C/DisplayPort/Thunderbolt port and 2 USB 3.1 ports. USB-C is going to be used for everything in the near future and a laptop with only a single USB-C port will be as annoying then as one with a single USB 2/3 port would be right now. Making a small laptop requires some engineering trade-offs and I can understand them limiting the number of USB 3.1 ports to save space. But having two or more USB-C ports wouldn’t have taken much space – it would take no extra space to have a USB-C port in place of the proprietary Ethernet port. It also has only a HDMI port for display, the USB-C/Thunderbolt/DisplayPort port is likely to be used for some USB-C device when you want an external display. The Lenovo advertising says “So you get Thunderbolt, USB-C, and DisplayPort all rolled into one”, but really you get “a choice of one of Thunderbolt, USB-C, or DisplayPort at any time”. How annoying would it be to disconnect your monitor because you want to read a USB-C storage device?

As an aside this might work out OK if you can have a DisplayPort monitor that also acts as a USB-C hub on the same cable. But if so requiring a monitor that isn’t even on sale now to make my laptop work properly isn’t a good strategy.

One problem I have is that resume from suspend requires holding down the power button. I’m not sure if it’s a hardware or software issue. But suspend on lid close works correctly and also suspend on inactivity when running on battery power. The X1 Carbon Gen 1 that I own doesn’t suspend on lid close or inactivity (due to a Linux configuration issue). So I have one laptop that won’t suspend correctly and one that won’t resume correctly.

The CPU is an i5-8250U which rates 7,678 according to cpubenchmark.net [2]. That’s 92% faster than the i7 in my personal Thinkpad, and more importantly I’m likely to actually get that performance without having the CPU overheat and slow down. That said, I got a thermal warning during the Debian install process, which is a bad sign. It’s also only 114% faster than the CPU in the Thinkpad T420 I bought in 2013. The model I got doesn’t have the fastest possible CPU, but I think that the T420 didn’t either. A 114% increase in CPU speed over 5 years is a long way from the factor of 4 or more that Moore’s law would have predicted.

The keyboard has the stupid positions for the PgUp and PgDn keys I noted on my last review. It’s still annoying and slows me down, but I am starting to get used to it.

The display is FullHD, it’s nice to have a laptop with the same resolution as my phone. It also has a slider to cover the built in camera which MIGHT also cause the microphone to be disconnected. It’s nice that hardware manufacturers are noticing that some customers care about privacy.

The storage is NVMe. That’s a nice feature, although being only 240G may be a problem for some uses.

Conclusion

Definitely a nice laptop if someone else is paying.

The fact that it had cooling issues from the first install is a concern. Laptops have always had problems with cooling and when a laptop has cooling problems before getting any dust inside it’s probably going to perform poorly in a few years.

Lenovo has gone too far trying to make it thin and light. I’d rather have the same laptop but slightly thicker, with a built-in Ethernet port, more USB ports, and a larger battery.

Related posts:

  1. More About the Thinkpad X301 Last month I blogged about the Thinkpad X301 I got...
  2. Thinkpad T420 I’ve owned a Thinkpad T61 since February 2010 [1]. In...
  3. Thinkpad X1 Carbon I just bought a Thinkpad X1 Carbon to replace my...
etbe https://etbe.coker.com.au etbe – Russell Coker

AsioHeaders 1.12.1-1

Tue, 11/09/2018 - 3:21 AM

A first update to the AsioHeaders package arrived on CRAN today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release is the first following the initial upload of version 1.11.0-1 in 2015. I had noticed the updated 1.12.1 version a few days ago, and then Joe Cheng surprised me with a squeaky clean PR as he needed it to get RStudio’s websocket package working with OpenSSL 1.1.0.

I actually bumbled up the release a little bit this morning, uploading 1.12.1 first and then 1.12.1-1, as we like having a packaging revision. Old habits die hard. So technically CRAN now has both, but we may clean that up and remove the 1.12.1 release from the archive, as 1.12.1-1 is identical but for two bytes in DESCRIPTION.

The NEWS entry follows; it really is just the header update done by Joe plus some Travis maintenance.

Changes in version 1.12.1-1 (2018-09-10)
  • Upgraded to Asio 1.12.1 (Joe Cheng in #2)

  • Updated Travis CI support via newer run.sh

Via CRANberries, there is a diffstat report relative to the previous release, as well as this time also one between the version-corrected upload and the main one.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box
