
Tobias Bernard: Introducing the CSD Initiative

Planet GNOME - Fri, 26/01/2018 - 4:24pm

tl;dr: Let’s get rid of title bars. Join the revolution!

Unless you’re one of a very lucky few, you probably use apps with title bars. In case you’ve never come across that term, title bars are the largely empty bars at the top of some application windows. They contain only the window title and a close button, and are completely separate from the window’s content. This makes them very inflexible, as they can not contain any additional UI elements, or integrate with the application window’s content.

Blender, with its badly integrated and pretty much useless title bar

Luckily, the GNOME ecosystem has been moving away from title bars in favor of header bars. This is a newer, more flexible pattern that allows putting window controls and other UI elements in the same bar. Header bars are client-side decorations (CSD), which means they are drawn by the app rather than the display server. This allows for better integration between application and window chrome.

 

GNOME Builder, an app that makes heavy use of the header bar for UI elements

All GNOME apps (except for Terminal) have moved to header bars over the past few years, and so have many third-party apps. However, there are still a few holdouts. Sadly, these include some of the most important productivity apps people rely on every day (e.g. LibreOffice, Inkscape, and Blender).

There are ways to hide title bars on maximized and tiled windows, but these do not (and will never) work on Wayland (Note: I’m talking about GNOME Shell on Wayland here, not other desktops). All window decorations are client-side on Wayland (even when they look like title bars), so there is no way to hide them at a window manager level.

The CSD Initiative

The only way to solve this problem long-term is to patch applications upstream to not use title bars. So this is what we’ll have to do.

That is why I’m hereby announcing the CSD Initiative, an effort to get as many applications as possible to drop title bars in favor of client-side decorations. This won’t be quick or easy, and will require work on many levels. However, with Wayland already being shipped as the default session by some distros, it’s high time we got started on this.

For a glimpse at what this kind of transition will look like in practice, we can look to Firefox and Chromium. Chromium has recently shipped GNOME-style client-side decorations in v63, and Firefox has them in experimental CSD builds. These are great examples for other apps to follow, as they show that apps don’t have to be 100% native GTK in order to use CSD effectively.

Chromium 63 with CSD

Chromium 63 with window buttons on the left

What is the goal?

This initiative doesn’t aim to make all apps look and act exactly like native GNOME apps. If an app uses GTK, we do of course want it to respect the GNOME HIG. However, it isn’t realistic to assume that apps like Blender or Telegram will ever look like perfectly native GNOME apps. In these cases, we’re aiming for functional, not visual, consistency. For example, it’s fine if an Electron app has custom close/maximize/minimize icons, as long as they use the same metaphors as the native icons.

Thus, our goal is for as many apps as possible to have the following properties:

  • No title bar
  • Native-looking close/maximize/minimize icons
  • Respects the setting for showing/hiding minimize and maximize
  • Respects the setting for buttons to be on the left/right side of the window
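
For the last two points, the preferences in question are ordinary GSettings keys; on a standard GNOME setup they can be inspected with, for example:

gsettings get org.gnome.desktop.wm.preferences button-layout
# typically something like 'appmenu:close' or ':minimize,maximize,close'

Apps implementing CSD generally read this key (directly or through their toolkit) to decide which buttons to draw and on which side of the bar.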
Which apps are affected?

Basically, all applications not using GTK3 (and a few that do use GTK3). That includes GTK2, Qt, and Electron apps. There’s a list of some of the most popular affected apps on this initiative’s Wiki page.

The process will be different for each app, and the changes required will range from “can be done in a weekend” to “holy shit we have to redesign the entire app”. For example, GTK3 apps are relatively easy to port to header bars because they can just use the native GTK component. GTK2 apps first need to be ported to GTK3, which is a major undertaking in itself. Some apps will require major redesigns, because removing the title bar goes hand in hand with moving from old-style menu bars to more modern, contextual menus.
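
To give an idea of what the native component looks like in code, here is a minimal GTK+ 3 sketch (a generic illustration, not taken from any particular application): the header bar takes over the role of the title bar and can carry regular widgets next to the window controls.

#include <gtk/gtk.h>

int main (int argc, char *argv[])
{
    gtk_init (&argc, &argv);

    GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    GtkWidget *header = gtk_header_bar_new ();

    /* The header bar replaces the server-side title bar... */
    gtk_header_bar_set_title (GTK_HEADER_BAR (header), "Example");
    gtk_header_bar_set_show_close_button (GTK_HEADER_BAR (header), TRUE);

    /* ...and can also hold arbitrary UI elements next to the window controls. */
    gtk_header_bar_pack_start (GTK_HEADER_BAR (header),
                               gtk_button_new_with_label ("Open"));

    gtk_window_set_titlebar (GTK_WINDOW (window), header);

    g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);
    gtk_widget_show_all (window);
    gtk_main ();
    return 0;
}

With gtk_header_bar_set_show_close_button() enabled, GTK+ also draws the standard window buttons and honours the user’s button preferences, which covers the last two points of the checklist above.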

Many Electron apps might be low-hanging fruit, because they already use CSD on macOS. This means it should be possible to make this happen on GNU/Linux as well without major changes to the app. However, some underlying work in Electron to expose the necessary settings to apps might be required first.

Slack, like many Electron apps, uses CSD on macOS

The same Slack app on Ubuntu (with a title bar)

Apps with custom design languages will have to be evaluated on a case-by-case basis. For example, Telegram’s design should be easy to adapt to a header bar layout. Removing the title bar and adding window buttons in the toolbar would come very close to a native GNOME header bar functionally.

Telegram as it looks currently, with a title bar

Telegram mockup with no title bar

How can I help?

The first step will be making a list of all the apps affected by this initiative. You can add apps to the list on this Wiki page.

Then we’ll need to do the following things for each app:

  1. Talk to the maintainers and convince them that this is a good idea
  2. Do the design work of adapting the layout
  3. Figure out what is required at a technical level
  4. Implement the new layout and get it merged

In addition, we need to evaluate what we can do at the toolkit level to make it easier to implement CSD (e.g. in Electron or Qt apps). This will require lots of work from lots of people with different skills, and nothing will happen overnight. However, the sooner we start, the sooner we’ll live in an awesome CSD-only future.

And that’s where you come in! Are you a developer who needs help updating your app to a header bar layout? A designer who would like to help redesign apps? A web developer who’d like to help make CSD work seamlessly in Electron apps? Come to #gnome-design on IRC/Matrix and talk to us. We can do this!

Happy hacking!

 

Update:

There have been some misunderstandings about what I meant regarding server-side decorations on Wayland. As far as I know (and take this with a grain of salt), Wayland uses CSD by default, but it is possible to add SSD support via protocol extensions. KDE has proposed such a protocol, and support for this protocol has been contributed to GTK by the Sway developers. However, GNOME Shell does not support it and its developers have stated that they have no plans to support it at the moment.

This is what I was referring to by saying that “it will never work on Wayland”. I can see how this could be misinterpreted from the point of view of other desktop environments but that was not my intention, it was simply unfortunate wording. I have updated the relevant part of this post to clarify.

Also, some people seem to have taken from this that we plan on lobbying for removing title bar support from third-party apps in a way that affects other desktops. The goal of this initiative is for GNOME users to get a better experience by having fewer applications with badly integrated title bars on their systems. That doesn’t preclude applications from having title bars on different desktops, or having a preference for this (like Chromium does, for example).

Ruben Vermeersch: Go: debugging multiple response.WriteHeader calls

Planet GNOME - Fri, 26/01/2018 - 4:11pm

Say you’re building an HTTP service in Go and suddenly it starts giving you these:

http: multiple response.WriteHeader calls

Horrible when that happens, right?

It’s not always very easy to figure out why you get them and where they come from. Here’s a hack to help you trace them back to their origin:

import (
    "log"
    "net/http"
    "os"
    "runtime/debug"
    "strings"
)

type debugLogger struct{}

func (d debugLogger) Write(p []byte) (n int, err error) {
    s := string(p)
    if strings.Contains(s, "multiple response.WriteHeader") {
        // Print a stack trace so you can see where the second WriteHeader call comes from.
        debug.PrintStack()
    }
    return os.Stderr.Write(p)
}

// Now use the logger with your http.Server (s is your existing handler):
logger := log.New(debugLogger{}, "", 0)
server := &http.Server{
    Addr:     ":3001",
    Handler:  s,
    ErrorLog: logger,
}
log.Fatal(server.ListenAndServe())

This will output a nice stack trace whenever it happens. Happy hacking!



Detecting binary files in the history of a git repository

Planet Debian - Fri, 26/01/2018 - 3:57pm
Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure all versions of the stored objects are never lost, Linus designed git in such a way that, if you know the exact hash of the tip/head of your git branch, it is guaranteed that the whole history of that branch hasn't changed, even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?
Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'

Then we can use the 'file' utility to list non-text files:

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'

And if there are any such files, then the current git commit is one that needs our attention; otherwise, we're fine.

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK

Of course, we assume here that the work tree is clean.

Checking all commits in a branch

Since we want to make this an efficient process, we only care whether the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.
Making a new branch for some experiments is also a good idea to avoid losing the history, in case we make some stupid mistakes during our experiment.

Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:
git checkout -b test_bins

Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Then we (ab)use 'git rebase' to execute that for us for all commits:

git rebase --exec="sh ../check_file_text.sh" -i $startcommit

After we execute this, the editor window will pop up; just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean, or beyond which we don't care to search for binaries, this will look at all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).

Enjoy!

eddyp noreply@blogger.com Rambling around foo

Christian Schaller: An update on Pipewire – the multimedia revolution

Planet GNOME - Fri, 26/01/2018 - 3:35pm

We launched PipeWire last September with this blog entry. I thought it would be interesting for people to hear about the latest progress on what I believe is going to be a gigantic step forward for the Linux desktop. So I caught up with PipeWire creator Wim Taymans during DevConf 2018 in Brno, where Wim is giving a talk about PipeWire; we discussed the current state of the code and Wim demonstrated a few of the things that PipeWire can now do.

Christian Schaller and Wim Taymans testing PipeWire with Cheese

Priority number 1: video handling

So as we said when we launched, the top priority for PipeWire is to address our needs on the video side of multimedia. This is critical due to the more secure nature of Wayland, which makes the old methods for screen sharing not work anymore, and due to the emergence of desktop containers in the form of Flatpak. Thus we need PipeWire to help us provide application and desktop developers with a new method for doing screen sharing, and also to provide a secure way for applications inside a container to access audio and video devices on the system.

There are 3 major challenges PipeWire wants to solve for video. One is device sharing, meaning that multiple applications can share the same video hardware device; second, it wants to be able to do so in a secure manner, ensuring your video streams are not hijacked by a rogue process; and finally, it wants to provide an efficient method for sharing multimedia between applications, for instance fullscreen capture from your compositor (like GNOME Shell) to your video conferencing application running in your browser, like Google Hangouts, Blue Jeans or Pexip.

So the first thing Wim showed me in action was the device sharing. We launched the GNOME photobooth application Cheese, which gets PipeWire support for free thanks to the PipeWire GStreamer plugin. And this is an important thing to remember: thanks to so many Linux applications using GStreamer these days, we don’t need to port each one of them to PipeWire; instead, the PipeWire GStreamer plugin does the ‘porting’ for us. We then launched a gst-launch command line pipeline in a terminal. The result is two applications sharing the same webcam input without one of them blocking access for the other.

As you can see from the screenshot above it worked fine, and this was actually done on my Fedora Workstation 27 system. The only thing we had to do was to start the ‘pipewire’ process in a terminal before starting Cheese and the gst-launch pipeline; GStreamer autoplugging took care of the rest. So feel free to try this out yourself if you are interested, but be aware that you will find bugs quickly if you try things like on-the-fly resolution changes or switching video devices. This is still tech preview level software in Fedora 27.
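
For reference, the second consumer of the webcam can be something as simple as a pipeline along these lines (assuming the PipeWire GStreamer plugin with its pipewiresrc element is installed):

gst-launch-1.0 pipewiresrc ! videoconvert ! autovideosink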

The plan is for Wim Taymans to sit down with the web browser maintainers at Red Hat early next week and see if we can make progress on supporting PipeWire in Firefox and Chrome, so that conferencing software like the ones mentioned above can start working fully under Wayland.

Since security was one of the drivers for the move from X Windows to Wayland, we of course also put a lot of emphasis on not recreating the security holes of X in the compositor. So the way PipeWire now works is that if an application wants to do full screen capture, it will check with the compositor through a D-Bus API (or a portal, in Flatpak and Wayland terminology), which only allows the permitted application to do the screen capture, so the stream can’t be hijacked by a random rogue application or process on your computer. This also works from within a sandboxed setting like Flatpaks.

Jack Support

Another important goal of PipeWire was to bring all Linux audio and video together, which means PipeWire needed to be as good as or a better replacement for Jack for the pro-audio use case. This is a tough use case to satisfy, so while the video part has been the top development priority, Wim has also worked on verifying that the design allows for the low latency and control needed for pro-audio. To do this Wim has implemented the Jack protocol on top of PipeWire.

Carla, a Jack application running on top of PipeWire.


Through that work he has now verified that he is able to achieve the low latency needed for pro-audio with PipeWire and that he will be able to run Jack applications without changes on top of PipeWire. So above you see a screenshot of Carla, a Jack-based application running on top of PipeWire with no Jack server running on the system.

ALSA/Legacy applications

Another item Wim has written the first code for, and verified will work well, is the ALSA emulation. The goal of this piece of code is to allow applications using the ALSA userspace API to output to PipeWire without needing special porting or application developer effort. At Red Hat we have many customers with older bespoke applications using this API, so it has been of special interest for us to ensure this works just as well as the native ALSA output. It is also worth noting that PipeWire also does mixing, so that sound being routed through ALSA will get seamlessly mixed with audio coming through the Jack layer.

Bluetooth support

The last item Wim has spent some time on since last September is making sure Bluetooth output works, and he demonstrated this to me while we were talking during DevConf. The PipeWire Bluetooth module plugs directly into the BlueZ Bluetooth framework, meaning that things like the GNOME Bluetooth control panel just work with it without any porting needed. And while the code is still quite young, Wim demonstrated pairing and playing music over Bluetooth with it.

What about PulseAudio?

So as you probably noticed, one thing we didn’t mention above is how to deal with PulseAudio applications. Handling this use case is still on the todo list, and the plan is to at least initially just keep PulseAudio running on the system, outputting its sound through PipeWire. That said, we are a bit unsure how many applications would actually be using this path because, as mentioned above, all GStreamer applications for instance would be PipeWire native automatically through the PipeWire GStreamer plugins. And for legacy applications the PipeWire ALSA layer would replace the current PulseAudio ALSA layer as the default ALSA output, meaning that the only applications left are those outputting to PulseAudio directly themselves. The plan would also be to keep the PulseAudio ALSA device around, so if people want to use things like the PulseAudio networked audio functionality they can choose the PA ALSA device manually to keep doing so.
Over time the goal would of course be to not have to keep the PulseAudio daemon around, but dropping it completely is likely to be a multiyear process with current plans, so it is kinda like XWayland on top of Wayland.

Summary

So you might read this and think: hey, with all this work we are almost done, right? Well, unfortunately no. The components mentioned here are good enough for us to verify the design and features, but they still need a lot of maturing and testing before they will be in a state where we can consider switching Fedora Workstation over to using them by default. So there are many warts that still need to be cleaned up, but a lot of things have become a lot more tangible now than when we last spoke about PipeWire in September. The video handling we hope to enable in Fedora Workstation 28 as mentioned, while the other pieces we will work towards enabling in later releases as the components mature.
Of course, the more people interested in joining the PipeWire community to help us out, the quicker we can mature these different pieces. So if you are interested, please join us in #pipewire on irc.freenode.net or just clone the code from GitHub and start hacking. You find the details for IRC and git here.

prrd 0.0.2: Many improvements

Planet Debian - Fri, 26/01/2018 - 12:30pm

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is little or no interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)
  • The package no longer require wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run use is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() functions checks if no reverse dependencies are found and stops (#6).

  • The enqueue() functions checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

David Tomaschik: Psychological Issues in the Security Industry

Planet Ubuntu - Fri, 26/01/2018 - 9:00am

I’ve unfortunately had the experience of dealing with a number of psychological issues (either personally or through personal connections) during my tenure in the security fold. I hope to shed some light on them and encourage others to take them seriously.

If you are hoping this post will be some grand reveal of security engineers going psychotic and stabbing users who enter passwords into phishing pages with poor grammar and spelling, web site administrators who can’t be bothered to set up HTTPS, and ransomware authors, then I hate to disappoint you. If, on the other hand, you’re interested in observations of people who have experienced various psychological problems while in the security industry, then I’ll probably still disappoint, just not as much.

Impostor Syndrome

According to Wikipedia:

Impostor syndrome is a concept describing individuals who are marked by an inability to internalize their accomplishments and a persistent fear of being exposed as a “fraud”. Despite external evidence of their competence, those exhibiting the syndrome remain convinced that they are frauds and do not deserve the success they have achieved.

I know many, many people in this industry who suffer from this and do not have the ability to recognize their own successes. They may only credit themselves with having “helped out” or done the “non-technical” parts. Some do not take career opportunities or refuse to believe their work is interesting to others.

I, myself, sit on the border of impostor syndrome, and it took me years to be convinced that it was only impostor syndrome and that I am not actually incompetent. Even after promotions, performance reviews “exceeding expectations”, and other signs that a rational individual would take as signs of success, I still believed that I was not doing the right things. I still believe that I’m not as technically strong or effective as any of my coworkers, despite repeated statements by my manager, my skip-level manager, and my director.

I’m not sure if it is possible to “get over” impostor syndrome, but I think most are able to recognize that their self-doubt is a figment of their imagination. It doesn’t necessarily make it easier to swallow, but at a certain point, if you really believe you’re a failure, you will sabotage yourself into being a failure. If you’re concerned about your performance, you don’t have to admit to impostor syndrome, but ask your teammates and manager: am I performing up to your expectations for someone at my level? Am I on track to keep progressing?

Impostor syndrome, though not a diagnosable mental illness, is probably the most common psychological issue faced by those in the security industry. It’s a fast-paced, competitive field, and it’s hard not to compare yourself to others who are more visible and have achieved so much.

Depression

I don’t think I need to define depression. I think it’s important to acknowledge that depression is not the same as “feeling down” or occasionally having a bad day. Most people who have suffered from depression describe it as a feeling that things will never get better or a complete lack of desire to do anything.

Depression doesn’t seem to be quite as widespread as impostor syndrome, but it’s clearly still a big issue. I have known multiple people in the industry who have suffered from depression, and it’s not something that gets “cured” – you just learn how to live with it (and sometimes use medication to help with the worst of it).

Depression is obviously not unique to our field, but I’ve known several people who suffered in silence because of the social aversion/shyness so prevalent. I strongly encourage those who think they might have depression to seek professional help. It’s not an easy thing to deal with, and the consequences can be terrible.

Working as part of a team can help with depression by increasing exposure to other individuals. If you currently work remotely/from home, consider a change that gets you out more and spending time with coworkers or others. It’s clearly not a fix or a panacea, but social interactions can help.

Anxiety

Anxiety is an entire spectrum of issues that members of our field deal with. There are many “introverts” in this industry, so social anxiety is a common issue, especially in situations like conferences or other events with large crowds. I use the term introverts loosely, because it turns out many people who call themselves introverted actually like to be around others and enjoy social interactions, but find them hard to do for reasons of anxiety. Social anxiety and introversion, it turns out, are not the same thing. (I’ve heard that shyness is the bottom end of a spectrum that leads to social anxiety at the upper end.)

Beyond social anxiety, we have generalized anxiety disorder. Given that we work in a field where we spend all day long looking at the way things can fail and the problems that can occur, it’s not surprising that we can tend to have a somewhat negative and anxious view of things. This tends to present with anxiety about a variety of topics, and often also has panic attacks associated.

There are, of course, many other forms of anxiety. I have long had anxiety in the form of so-called “Pure-O” OCD – that is, Obsessive-Compulsive Disorder with only the Obsessive Thoughts and not the Compulsions. This leads to worst-case scenario thinking and an inability to avoid “intrusive” thoughts. It also makes it incredibly hard to manage my work-life balance because I cannot separate my thoughts from my work. I have spent entire weekends unable to do anything because I’ve been thinking about projects I’m dreading or meetings I have the next week. I also tend to obsess about stupid mistakes I make or whether or not I have missed something. At the end of the day, I value certainty and hate the unknowns. (Security is a perfect field for discovering that you don’t handle uncertainty!) At times it can lead to depression as well.

Feeling Overwhelmed (Burn Out)

Obviously this isn’t a diagnosable issue either, but a lot of people in this industry get quite overwhelmed. Burn out is a big problem, and one I’m trying to cope with even as I write this. There’s a number of reasons I see for this:

  1. It’s hard to keep up with this industry. If you just work a 9-5 job in security and spend no time outside that keeping current, I don’t think you’ll have an easy time keeping up.
  2. In many companies, once someone has interacted with you on one project, you’re their permanent “security contact” – and they’ll ask you every question they possibly can.
  3. At least on my team, you’re never able to work on a single thing – I currently have at least a half-dozen projects in parallel. Context switching is very hard for me, and the fastest way to lead to burn out for me.
  4. A lot of being a security professional is not technical, even if that’s the part you love the most. You’ll spend a big part of your time explaining things to product managers, non-security engineers, and others to get your point across.

I wish I had an instant solution for burnout, but if I did, I probably wouldn’t be feeling burnt out. If you have a supportive manager, get them involved early if you’re feeling burnout coming on. I have a great management chain, but I was too “proud” to admit to approaching burnout (because I viewed it as a personal failure) until I was nearly at the point of rage quitting. I still haven’t really fixed it, but I’ve discussed some steps I’ll be taking over the next couple of months to see if I can get myself back to a sane and productive state.

Other Reading

Conclusion

I’m hoping this is a helpful tour of some of the mental issues I’ve dealt with in my life, my career, and 5 years in the security industry. It’s not easy, and by no means do I think I hold the answers (if I did, I would probably feel a lot better myself), but I think it’s important to recognize these issues exist. Most of them are not unique to our industry, but I feel that our industry tends to exacerbate them when they exist. I hope that we, as an industry and a community, can work to help those who are suffering or have issues to work past them and become a more successful member of the community.

Bastian Ilsø Hougaard: GNOME at FOSDEM 2018 – with socks and more!

Planet GNOME - Fri, 26/01/2018 - 1:50am


Sunrise over Hobart seen from Mt Wellington, Tasmania (CC-BY-SA 4.0).

It’s been a while huh? The past six months held me busy traveling and studying abroad in Australia, but I’m back! With renewed energy, and lots and lots of GNOME socks for everyone. Like previous years, I’m helping out in GNOME’s FOSDEM booth at the FOSDEM 2018 conference.


FOSDEM 2016. (CC-BY-SA 4.0)

I have arranged for a whopping 420 pairs of GNOME socks to be produced, hopefully arriving before my departure. Baby socks, ankle socks, regular socks and even knee socks – maybe I should order an extra suitcase to fill up. Even so, I estimate I can probably bring 150 pairs at most (last year my small luggage held 55 pairs..). Because of the large quantity I’ve designed them to be fairly neutral and “simple” (well, actually the pattern is rather complicated).


Sample sock made prior to production.


Breakdown of the horizontally repeatable sock pattern.

I plan to bring them to FOSDEM 2018, Open Source Days in Copenhagen, FOSS North and GUADEC. However, we have also talked about getting some socks shipped to the US or Asia, although a box of 100 socks weigh a lot resulting in expensive shipping. So if anyone is going to any of the aforementioned conferences and can keep some pairs in their luggage, let me know!

Apart from GNOME Booth staffing I am also helping out with organizing small newcomer workshops at FOSDEM! If you are coming to FOSDEM and are interested in mentoring one or two newcomers with your project, let us know on the Newcomer workshop page (more details here too). Most of all, I look forward to meeting fellow GNOME people again as I feel I have been gone quite a long time. I miss you!

Matthew Helmke: Attacking Network Protocols

Planet Ubuntu - Thu, 25/01/2018 - 11:22pm

I am always trying to expand the boundaries of my knowledge. While I have a basic understanding of networking and a high-level understanding of security issues, I have never studied or read up on the specifics of packet sniffing or other network traffic security topics. This book changed that.

Attacking Network Protocols: A Hacker’s Guide to Capture, Analysis, and Exploitation takes a network attacker’s perspective while probing topics related to data and system vulnerability over a network. The author, James Forshaw, takes an approach similar to the perspective taken by penetration testers (pen testers), the so-called white hat security people who test a company’s security by trying to break through its defenses. The premise is that if you understand the vulnerabilities and attack vectors, you will be better equipped to protect against them. I agree with that premise.

Most of us in the Free and Open Source software world know about Wireshark and using it to capture network traffic information. This book mentions that tool, but focuses on using a different tool that was written by the author, called CANAPE.Core. Along the way, the author calls out multiple other resources for further study. I like and appreciate that very much! This is a complex topic and even a detailed and technically complex book like this one cannot possibly cover every aspect of the topic in 300 pages. What is covered is clearly expressed, technically deep, and valuable.

The book covers topics ranging from network basics to passive and active traffic capture all the way to the reverse engineering of applications. Along the way Forshaw covers network protocols and their structures, compilers and assemblers, operating system basics, CPU architectures, dissectors, cryptography, and the many causes of vulnerabilities.

Closing the book is an appendix (additional chapter? It isn’t precisely defined, but it is extra content dedicated to a specific topic) that describes a multitude of tools and libraries that the author finds useful, but may not have had an excuse to mention earlier in the book. This provides a set of signposts for the reader to follow for further research and is, again, much appreciated.

While I admit I am a novice in this domain, I found the book helpful, interesting, of sufficient depth to be immediately useful, with enough high-level descriptions and clarification to give me the context and thoughts for further study.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

Adrien Plazas: GTK+ Apps on Phones

Planet GNOME - Thu, 25/01/2018 - 4:42pm

As some of you may already know, I recently joined Purism to help developing GTK+ apps for the upcoming Librem 5 phone.

Purism and GNOME share a lot of ideas and values, so the GNOME HIG and GNOME apps are what we will focus on primarily: we will do all we can to not fork nor to reinvent the wheel but to help allowing existing GTK+ applications to work on phones.

How Fit are Existing GTK+ Apps?

Phones are very different from laptops and even tablets: their screen is very small and their main input method is a single thumb on a touchscreen. Luckily, many GNOME applications are touch-friendly and are fit for small screens. Many applications present you with a tree of information you can browse, and I see two main layouts used by GNOME applications to let you navigate it.

A first kind of layout is found in applications like Documents, I'll call it stack UI: it uses all the available space to display the collection of information sources (in that case, documents); clicking an element from the collection will focus on it, displaying its content stacked on top of the collection and letting you go back to the collection with a Back button. Applications sporting this layout are the most phone-enabled ones, as phone apps typically follow a similar layout. Some polish may be needed to make them shine on a phone but overall not so much. Other applications using this layout are Music, Videos, Games, Boxes…

A second kind of layout is found in applications like Contacts, I'll call it panel UI: it displays all the levels of information side-by-side in panels: the closer the information is to the left, the closer it is to the root, each selected node of the information tree being highlighted. This is nice if you have enough window space to display all this information and the user doesn't need to focus on the leaves, as it allows quickly jumping to other elements of the collection. Unfortunately window space is rare on phones, so these applications would need to be adjusted to fit their screens. Other applications using this layout are Settings, Geary, Polari, FeedReader…

Of course, other layouts exist and are used, but I won't cover these here.

Stack UIs respond to size changes by displaying more or less of the current level of information, but panel UIs tend to seemingly arbitrarily limit the minimum size of the window to keep displaying all the levels of information, even though some may not be needed. The responsibility of handling the layout and sizes to display more or less of some levels of information is sometimes offloaded to the user via the usage of GtkPaned, who then has to manually adjust which information to hide or to display by changing the width of columns every time they need access to other information or when the window's size changes. A notable example of hurtful GtkPaned usage is Geary, which can be a bit of a pain to use half-maximized on a 1920×1080 screen.

Responsive GTK+ Apps

Panel UIs need to be smarter and should decide depending on the window's size which information is relevant to the user and should be displayed. As we don't want to replace the current experience but to extend it, the UIs need to respond to window size changes and explicit focus change requests.

One way of doing it would be to stack the panels one on top of the other to show only one at a time, adding extra Back buttons as needed, effectively switching the UI between panels and a stack.

Another one would be to have floating panels like on KDE Discover. I am not a fan of this method, but on the other hand I'm not a designer.

I expect that to make applications like Geary easier to use even on laptops.

Implementing GtkResponsiveBox

I will try to implement a widget I call GtkResponsiveBox. It contains two children displayed side by side when the box's size is above a given threshold and only one of them when the size is below it.

I expect this widget to look like a weird mix of GtkPaned and GtkStack, to be orientable and to have the following sizes:

  • minimal size = max (widget 1 minimal size, widget 2 minimal size)
  • natural size = widget 1 natural size + widget 2 natural size
  • threshold size = widget 1 minimal size + widget 2 minimal size
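
Purely to illustrate those size rules, a hypothetical sketch of the horizontal size request could look roughly like this (the GtkResponsiveBox type and its child fields are made up for this example; it is not an actual implementation):

/* Hypothetical sketch only: assumes a GtkResponsiveBox type with
 * two child widget fields, child1 and child2. */
static void
gtk_responsive_box_get_preferred_width (GtkWidget *widget,
                                        gint      *minimum,
                                        gint      *natural)
{
    GtkResponsiveBox *box = GTK_RESPONSIVE_BOX (widget);
    gint min1, nat1, min2, nat2;

    gtk_widget_get_preferred_width (box->child1, &min1, &nat1);
    gtk_widget_get_preferred_width (box->child2, &min2, &nat2);

    /* minimal size = max (widget 1 minimal size, widget 2 minimal size) */
    *minimum = MAX (min1, min2);

    /* natural size = widget 1 natural size + widget 2 natural size */
    *natural = nat1 + nat2;

    /* The threshold (min1 + min2) would then be used during size
     * allocation to decide whether both children or only one is shown. */
}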

I am not completely sure yet how to implement it nor if this widget is a good idea overall. Don't expect anything working soon as it's the first time I subclass GtkContainer. I'll let you know how implementing the widget goes, but in the meantime any comment is welcome!

Thanks Philip for your guide detailing how to implement a custom container, it helps me a lot!

Changes in Prometheus 2.0

Planet Debian - Thu, 25/01/2018 - 1:00am

This is one part of my coverage of KubeCon Austin 2017. Other articles include:

2017 was a big year for the Prometheus project, as it published its 2.0 release in November. The new release ships numerous bug fixes, new features and, notably, a new storage engine that brings major performance improvements. This comes at the cost of incompatible changes to the storage and configuration-file formats. An overview of Prometheus and its new release was presented to the Kubernetes community in a talk held during KubeCon + CloudNativeCon. This article covers what changed in this new release and what is brewing next in the Prometheus community; it is a companion to this article, which provided a general introduction to monitoring with Prometheus.

What changed

Orchestration systems like Kubernetes regularly replace entire fleets of containers for deployments, which means rapid changes in parameters (or "labels" in Prometheus-talk) like hostnames or IP addresses. This was creating significant performance problems in Prometheus 1.0, which wasn't designed for such changes. To correct this, Prometheus ships a new storage engine that was specifically designed to handle continuously changing labels. This was tested by monitoring a Kubernetes cluster where 50% of the pods would be swapped every 10 minutes; the new design was proven to be much more effective. The new engine boasts a hundred-fold I/O performance improvement, a three-fold improvement in CPU, five-fold in memory usage, and increased space efficiency. This impacts container deployments, but it also means improvements for any configuration as well. Anecdotally, there was no noticeable extra load on the servers where I deployed Prometheus, at least nothing that the previous monitoring tool (Munin) could detect.

Prometheus 2.0 also brings new features like snapshot backups. The project has a longstanding design wart regarding data volatility: backups are deemed to be unnecessary in Prometheus because metrics data is considered disposable. According to Goutham Veeramanchaneni, one of the presenters at KubeCon, "this approach apparently doesn't work for the enterprise". Backups were possible in 1.x, but they involved using filesystem snapshots and stopping the server to get a consistent view of the on-disk storage. This implied downtime, which was unacceptable for certain production deployments. Thanks again to the new storage engine, Prometheus can now perform fast and consistent backups, triggered through the web API.

Another improvement is a fix to the longstanding staleness handling bug where it would take up to five minutes for Prometheus to notice when a target disappeared. In that case, when polling for new values (or "scraping" as it's called in Prometheus jargon) a failure would make Prometheus reuse the older, stale value, which meant that downtime would go undetected for too long and fail to trigger alerts properly. This would also cause problems with double-counting of some metrics when labels vary in the same measurement.

Another limitation related to staleness is that Prometheus wouldn't work well with scrape intervals above two minutes (instead of the default 15 seconds). Unfortunately, that is still not fixed in Prometheus 2.0 as the problem is more complicated than originally thought, which means there's still a hard limit to how slowly you can fetch metrics from targets. This, in turn, means that Prometheus is not well suited for devices that cannot support sub-minute refresh rates, which, to be fair, is rather uncommon. For slower devices or statistics, a solution might be the node exporter "textfile support", which we mentioned in the previous article, and the pushgateway daemon, which allows pushing results from the targets instead of having the collector pull samples from targets.

The migration path

One downside of this new release is that the upgrade path from the previous version is bumpy: since the storage format changed, Prometheus 2.0 cannot use the previous 1.x data files directly. In his presentation, Veeramanchaneni justified this change by saying this was consistent with the project's API stability promises: the major release was the time to "break everything we wanted to break". For those who can't afford to discard historical data, a possible workaround is to replicate the older 1.8 server to a new 2.0 replica, as the network protocols are still compatible. The older server can then be decommissioned when the retention window (which defaults to fifteen days) closes. While there is some work in progress to provide a way to convert 1.8 data storage to 2.0, new deployments should probably use the 2.0 release directly to avoid this peculiar migration pain.

Another key point in the migration guide is a change in the rules-file format. While 1.x used a custom file format, 2.0 uses YAML, matching the other Prometheus configuration files. Thankfully the promtool command handles this migration automatically. The new format also introduces rule groups, which improve control over the rules execution order. In 1.x, alerting rules were run sequentially but, in 2.0, the groups are executed sequentially and each group can have its own interval. This fixes the longstanding race conditions between dependent rules that create inconsistent results when rules would reuse the same queries. The problem should be fixed between groups, but rule authors still need to be careful of that limitation within a rule group.
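
For a sense of what the new format looks like, a minimal 2.0 rules file with a single group might read as follows (a generic example, not taken from the talk):

groups:
  - name: example
    interval: 30s
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
      - alert: InstanceDown
        expr: up == 0
        for: 5m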

Remaining limitations and future

As we saw in the introductory article, Prometheus may not be suitable for all workflows because of its limited default dashboards and alerts, but also because of the lack of data-retention policies. There are, however, discussions about variable per-series retention in Prometheus and native down-sampling support in the storage engine, although this is a feature some developers are not really comfortable with. When asked on IRC, Brian Brazil, one of the lead Prometheus developers, stated that "downsampling is a very hard problem, I don't believe it should be handled in Prometheus".

Besides, it is already possible to selectively delete an old series using the new 2.0 API. But Veeramanchaneni warned that this approach "puts extra pressure on Prometheus and unless you know what you are doing, its likely that you'll end up shooting yourself in the foot". A more common approach to native archival facilities is to use recording rules to aggregate samples and collect the results in a second server with a slower sampling rate and different retention policy. And of course, the new release features external storage engines that can better support archival features. Those solutions are obviously not suitable for smaller deployments, which therefore need to make hard choices about discarding older samples or getting more disk space.

As part of the staleness improvements, Brazil also started working on "isolation" (the "I" in the ACID acronym) so that queries wouldn't see "partial scrapes". This hasn't made the cut for the 2.0 release, and is still work in progress, with some performance impacts (about 5% CPU and 10% RAM). This work would also be useful when heavy contention occurs in certain scenarios where Prometheus gets stuck on locking. Some of the performance impact could therefore be offset under heavy load.

Another performance improvement mentioned during the talk is an eventual query-engine rewrite. The current query engine can sometimes cause excessive loads for certain expensive queries, according to the Prometheus security guide. The goal would be to optimize the current engine so that those expensive queries wouldn't harm performance.

Finally, another issue I discovered is that 32-bit support is limited in Prometheus 2.0. The Debian package maintainers found that the test suite fails on i386, which led Debian to remove the package from the i386 architecture. It is currently unclear if this is a bug in Prometheus: indeed, it is strange that Debian tests actually pass in other 32-bit architectures like armel. Brazil, in the bug report, argued that "Prometheus isn't going to be very useful on a 32bit machine". The position of the project is currently that "'if it runs, it runs' but no guarantees or effort beyond that from our side".

I had the privilege to meet the Prometheus team at the conference in Austin and was happy to see different consultants and organizations working together on the project. It reminded me of my golden days in the Drupal community: different companies cooperating on the same project in a harmonious environment. If Prometheus can keep that spirit together, it will be a welcome change from the drama that affected certain monitoring software. This new Prometheus release could light a bright path for the future of monitoring in the free software world.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Movit 1.6.0 released

Planet Debian - Thu, 25/01/2018 - 12:49am

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  - Support for effects that work as compute shaders. Compute shaders are generally slower than fragment shaders for the same algorithm, but allow some forms of communication between shader invocations and have more flexible output, which can enable more efficient algorithms. See effect.h for more details. Note that the fastest rendering API on EffectChain is now to a texture if possible, not to an FBO. This will only matter if the last effect is a compute shader.

  - Movit now includes a compute shader implementation of DeinterlaceEffect, which is automatically used instead of the fragment shader implementation if your GPU and OpenGL driver supports it (in practice, this means on all platforms except on macOS). The compute shader version is typically 20–80% faster than the fragment shader version, depending on your GPU and other factors. A compute shader implementation of ResampleEffect was written but ultimately failed to be faster, and so is not included.

  - Support for microbenchmarks of effects through the Google microbenchmarking framework (optional). Currently, DeinterlaceEffect and ResampleEffect has benchmarks; enable them by running the unit test with --benchmark (also try --benchmark --help).

  - Effects can now explicitly request _not_ to have mipmaps, which means they can do so without needing to request bounce and fiddling with the sampler state. Note that this is an API change for effects.

  - Movit now requires C++11, both to build and to #include the header files. Support for SDL1 has been dropped; unit tests and the demo program now need SDL2.

  - Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

The Pune Metro 1st anniversary celebrations

Planet Debian - Thu, 25/01/2018 - 12:13am

This would be long. First and foremost, a couple of days ago I got the following direct message on my Twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter into any competition or put up any queries with the aim of getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and choice, and I do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I’m sure most of you will agree with me: it takes more than twice the time if you are taking public transport, at least that’s true in India. Part of it is due to the drivers not keeping the GPS on, nor people/users asking/stressing for using GPS and that location-based info to know when the next bus is going to come. For instance, my journey to PCMC was roughly 20 km and took about 45 minutes, but waiting took almost twice that time, and this was not rush hour, when it could easily have taken double the time. Hence people opt for private vehicles even though they know it’s harmful for the environment as well as for themselves and their loved ones.

PMC (Pune Municipal Corporation) has had a plan for over a decade to use GPS to let the traveling public know, tentatively, when the next bus would arrive, but it has stalled due to a number of reasons (corruption, lack of training, awareness, discipline), all of which has been hampering citizens’ productivity; as a result people are forced to get private vehicles and it becomes a zero-sum game. There is much more, but I don’t want to go into that here.

Now people have asked me what sort of suggestions I gave or am giving –

After seeing MahaMetro’s interaction with the press yesterday, it seems the press or media has a very poor understanding of the dynamics and is not really interested in enriching citizens’ understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro in not sharing knowledge as much as it can, given the opportunities that digital media/space provides at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don’t think any of the people from the press, especially the English-language press, has seen the DPR; otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is: let’s say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties, so how will you build it? Also, you don’t have the money. The DPR is different only in the sense of the scale of things, and construction of the routes is not by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts and people are asked to submit tenders for the packet a company/contractor is interested in.

The way I see it, the DPR has to figure out the right of way where the spans have to be constructed, where the stations have to be built, where the electricity and water will come from, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars, etc.

There is a pre-qualification round so that only eligible bidders with a history of doing work of a similar scale take part, and then bidding decides who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for instance because the reserve price is too high from the contractors' point of view, the reserve price is lowered. The idea is simply price discovery through a method that can be seen as just and fair.

The press seemed more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat over something which, to my mind, is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can't comment on when the extension to Nigdi will happen until the DPR for the extension is made, land is found, the various cost heads and expenses are approved by the State and Central Governments, and funding from multiple sources is in place.

The other question the press raised was the razing of the BRTS in Pune. The press knew it is neither Mr. Dixit's place nor his responsibility to comment on whatever happens to the BRTS; he can't even comment, as that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit's obligations, they are to build the Pune Metro safely, quickly, and with good materials, to provide good signage, and to deliver an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and an upgrade. You should use something like WordPress, where you are able to change themes every now and then. Every 3-6 months the theme should be tweaked so the content remains, or at least looks, fresh.

3. None of the videos have timestamps. At the very least they should have timestamps so some sort of contextual information is available.

4. There is no way to know when there is news. News should be highlighted and more information shared. For instance, there should have been more information about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how the Singapore Government would help us in achieving a unified multi-modal transport model.

There were, and are, tons of questions that the press could have asked but didn't, as above and more below.

5. The DPR was made in November 2015 and it is now 2018. The prices probably need to be adjusted to take into account changes over those three years, and will probably change further up to 2021.

6. Similarly, there are time gaps between the plans and their execution, and we Puneites don't even know what the plan is.

I would urge Pune Metro to publish a dynamic plan which shows, with blinking lights, the areas in which work is progressing, so people know which areas are active and which are not. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea which could be done, or might even already be underway, is to have a single photograph taken every day at, say, 1200 hrs at all the work areas at 640×480 resolution. These could be uploaded to the site and published onto a separate web page, which in turn, over days and weeks, could be turned into a time-lapse video similar to the first viaduct video, which was shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it on something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer) with a camera lens, a solar cell, and a modem, with instructions to stagger the image uploads to the Pune Metro rail portal in case web traffic is already high. A specific port (not port 80) could be used.
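Purely as an illustration, here is a minimal sketch of what such a daily capture-and-upload script could look like in Python, assuming a Raspberry Pi with the picamera and requests libraries; the upload URL, port, and file names are placeholders I made up, not real Pune Metro services:

#!/usr/bin/env python3
# Sketch: capture one 640x480 photo and upload it to a portal.
# The endpoint below is a placeholder, not a real Pune Metro service.
from datetime import datetime

import requests
from picamera import PiCamera

UPLOAD_URL = "https://example.org:8443/timelapse/upload"  # hypothetical endpoint on a non-standard port

def capture_and_upload():
    camera = PiCamera()
    camera.resolution = (640, 480)
    filename = datetime.now().strftime("site-a-%Y%m%d-%H%M.jpg")
    camera.capture(filename)
    camera.close()
    # Staggering uploads (e.g. sleeping a random number of minutes here)
    # would avoid every site hitting the portal at the same moment.
    with open(filename, "rb") as photo:
        requests.post(UPLOAD_URL, files={"photo": photo}, timeout=60)

if __name__ == "__main__":
    # Meant to be run once a day, e.g. from cron at 12:00.
    capture_and_upload()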

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler – something which has already been done once for the viaduct video mentioned above.
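And for the stitching step, a rough sketch assuming ffmpeg is installed (the directory, music file, and output names are made up):

#!/usr/bin/env python3
# Sketch: stitch the collected daily photos into a time-lapse video with ffmpeg.
import subprocess

def build_timelapse(photo_glob="photos/*.jpg", music="filler-music.mp3", output="timelapse.mp4"):
    cmd = [
        "ffmpeg",
        "-framerate", "24",                         # 24 photos per second of video
        "-pattern_type", "glob", "-i", photo_glob,  # frames in file-name (date) order
        "-i", music,                                # optional filler music
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-shortest",                                # stop when the shorter stream ends
        output,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    build_timelapse()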

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust this if there were a web service which showed whether things are going according to schedule or are a bit off. It overlaps a bit with my earlier suggestion, but there are many development projects around the world which show tentative versus actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section on the site, with blinkers or similar, about road diversions which are in effect.

10. Another would be to have an RSS feed of all news found by various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and judge for themselves.
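As a sketch of how simple this could be, something like the following merges a few feeds and drops duplicate links; it assumes the Python feedparser library, and the feed URLs are just placeholders:

#!/usr/bin/env python3
# Sketch: merge several RSS feeds and drop duplicate links.
import feedparser

FEEDS = [
    "https://example.org/pune-metro-news.rss",   # placeholder feed URLs
    "https://example.net/metro-mentions.rss",
]

def merged_news():
    seen = set()
    items = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            link = entry.get("link")
            if link and link not in seen:        # skip duplicate links
                seen.add(link)
                items.append((entry.get("title", ""), link))
    return items

if __name__ == "__main__":
    for title, link in merged_news():
        print(title, "-", link)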

11. Statistics on jobs (both direct and indirect) created due to Pune Metro works should be displayed prominently.

12. Have a glossary of terms, which could easily be compiled by having an average 10th-12th standard student go through, say, the DPR and note which terms they have problems with.

The simplest example is the word ‘Reach’, which is used in a different context in Pune Metro than is usually understood.

13. Are there, and if so how many, Indian SMEs that have been entrusted, either via joint venture or in some other way, with knowledge transfer for making and maintaining the rakes, cars/bogies, track, etc.?

14. Has any performance and load guarantee been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruit. Also, I'm no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, if there is a need, who understand the medium in a far better and more intimate way than the crude manner I have shared above.

A student who believes democracy needs work, and needs effort to make democracy work. If citizens themselves do not ask these questions, who will?

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Michael Catanzaro: Announcing Epiphany Technology Preview

Planet GNOME - Enj, 25/01/2018 - 12:09pd

If you use macOS, the best way to use a recent development snapshot of WebKit is surely Safari Technology Preview. But until now, there’s been no good way to do so on Linux, short of running a development distribution like Fedora Rawhide.

Enter Epiphany Technology Preview. This is a nightly build of Epiphany, on top of the latest development release of WebKitGTK+, running on the GNOME master Flatpak runtime. The target audience is anyone who wants to assist with Epiphany development by testing the latest code and reporting bugs, so I’ve added the download link to Epiphany’s development page.

Since it uses Flatpak, there are no host dependencies aside from Flatpak itself, so it should work on any system that can run Flatpak. Thanks to the Flatpak sandbox, it’s far more secure than the version of Epiphany provided by your operating system. And of course, you enjoy automatic updates from GNOME Software or any software center that supports Flatpak.

Enjoy!

(P.S. If you want to use the latest stable version instead, with all the benefits provided by Flatpak, get that here.)

Ismael Olea: I'm going to FOSDEM 2018

Planet GNOME - Mër, 24/01/2018 - 8:51md

Yeah. I finally decided I’m going to FOSDEM this year. 2018 is the year I’m retaking my life as I like it, and the right way to start is by meeting all those friends and colleagues I missed in those years of exile. I plan to attend the beer event as soon as I arrive in Brussels.

If you want to talk to me about GUADEC 2018, Fedora Flock 2018 or whatever, please reach me on Twitter (@olea) or Telegram (@IsmaelOlea).

BTW, there are a couple of relevant FOSDEM-related Telegram groups:

General English Telegram group: https://t.me/fosdem

Spanish spoken one: https://t.me/fosdem_ES

PS: A funny thing about FOSDEM is… this is the place where Spanish (or indeed Madrileño) open-source enthusiasts can all meet once a year… in Brussels!

Michael Meeks: 2018-01-24 Wednesday

Planet GNOME - Mër, 24/01/2018 - 6:37md
  • Poked mail, chat with Miklos, poked at partner mail and tasks. Sync with Noel & Eloy, customer call, out to see the dentist.
  • Pleased to discover from Pranav that although Linux' skype client inexplicably (and globally) eats the ctrl-alt-shift-D key-combination (without doing anything with it seemingly) - I can still select an online window and do map._docLayer.toggleTileDebugMode().

Ideas for the project architecture and short term goals

Planet Debian - Mër, 24/01/2018 - 5:49md

There have been many discussions about planning for the FOSS calendar. In this post, I report on some of the ideas.

How I first thought the Foss Events calendar

Back in December, when I was just making sense of my surroundings and trying to find a way to start the internship, I drew this diagram to picture in my head how everything would work:

  1. There would be a "crawler.py" module, which would access each site on a predetermined list (it could be Facebook, Meetup, or any other site, such as another calendar) that has event information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats.

  5. There would be a user interface to get data from API.py and to display it. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, that plugin would fall into this category.] A rough sketch of this pipeline follows the list.
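Purely as an illustration of the shape of this pipeline, here is a minimal Python sketch; the "databases" are plain lists, the source is made up, and the real modules would of course be far more involved:

# Sketch of the pipeline: crawler -> validator -> dirty_events -> parser -> events -> API.
import json

def crawler(sources):
    """Pull raw event data from each site on the list."""
    raw = []
    for source in sources:
        raw.extend(source())                  # each source returns a list of dicts
    return raw

def validator(raw_events, dirty_events):
    """Check that there is data and that it looks usable, then dump it."""
    for event in raw_events:
        if event and event.get("name"):
            dirty_events.append(event)

def parser(dirty_events, events):
    """Clean and organize the data for proper storage."""
    for event in dirty_events:
        events.append({
            "name": event["name"].strip(),
            "date": event.get("date", ""),
            "url": event.get("url", ""),
        })

def api(events):
    """Return the stored events, formatted as JSON (ical etc. omitted here)."""
    return json.dumps(events, indent=2)

if __name__ == "__main__":
    fake_source = lambda: [{"name": " FOSDEM 2018 ", "date": "2018-02-03", "url": "https://fosdem.org"}]
    dirty_events, events = [], []
    validator(crawler([fake_source]), dirty_events)
    parser(dirty_events, events)
    print(api(events))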

The ideas that Daniel put on paper

Then, when I shared with my mentors, Daniel came up with this:

Daniel proposed that modules or plugins could be developed or improved (some already exist, but they might not support iCalendar URLs) for MoinMoin, Drupal, and WordPress, to allow the event data each of these systems holds to be aggregated. Information from the Meetup and Facebook APIs could be converted to iCalendar so it can be aggregated too. This aggregation process could happen through a cron job – and I believe daily is enough, because people don't usually publish an event for the very next day (they need time for people to hear about it). If that time frame ends up not being ideal, it can be reviewed and changed later.
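To give an idea of what that daily job could do, here is a small sketch that fetches a couple of iCalendar URLs and collects their events; it assumes the Python requests and icalendar libraries, and the URLs are placeholders:

# Sketch: fetch iCalendar URLs (as a daily cron job would) and collect their events.
import requests
from icalendar import Calendar

ICAL_URLS = [
    "https://example.org/events.ics",        # e.g. a MoinMoin or WordPress export
    "https://example.net/meetup-group.ics",  # e.g. Meetup/Facebook data converted to iCalendar
]

def aggregate():
    events = []
    for url in ICAL_URLS:
        cal = Calendar.from_ical(requests.get(url, timeout=30).text)
        for component in cal.walk("VEVENT"):
            events.append({
                "summary": str(component.get("SUMMARY")),
                "start": component.decoded("DTSTART"),
                "source": url,
            })
    return events

if __name__ == "__main__":
    for event in aggregate():
        print(event["start"], event["summary"])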

Once all this data is gathered, it would then be stored, by inserting or updating records in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be by adding a function to it. Just as there are other functions in this plugin to change the display of the calendar, there could be a function that serializes the data into the iCalendar format and allows downloading the file.
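Just to sketch the conversion step I have in mind: this is not the actual EventCalendar code (the real macro would read its own event storage and hand the result back through MoinMoin's request object), it assumes the Python icalendar library, and the sample data is made up.

# Sketch: serialize event data into iCalendar text for download.
from datetime import date
from icalendar import Calendar, Event

def events_to_ical(events):
    cal = Calendar()
    cal.add("prodid", "-//MoinMoin EventCalendar sketch//EN")
    cal.add("version", "2.0")
    for item in events:
        event = Event()
        event.add("summary", item["title"])
        event.add("dtstart", item["start"])
        event.add("dtend", item["end"])
        cal.add_component(event)
    return cal.to_ical()   # bytes, ready to be served as a .ics file

if __name__ == "__main__":
    sample = [{"title": "FOSS events calendar sprint", "start": date(2018, 2, 1), "end": date(2018, 2, 2)}]
    print(events_to_ical(sample).decode())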

Renata https://rsip22.github.io/blog/ Renata's blog

Kubuntu General News: Plasma 5.12 LTS beta available in PPA for testing on Artful & Bionic

Planet Ubuntu - Mër, 24/01/2018 - 4:10md

Adventurous users, testers and developers running Artful 17.10 or our development release Bionic 18.04 can now test the beta version of Plasma 5.12 LTS.

An upgrade to the required Frameworks 5.42 is also provided.

As with previous betas, this is experimental and is only suggested for people who are prepared for possible bugs and breakages.

In addition, please be prepared to use ppa-purge to revert changes, should the need arise at some point.

Read more about the beta release.

If you want to test then:

sudo add-apt-repository ppa:kubuntu-ppa/beta

and then update packages with

sudo apt update
sudo apt full-upgrade

A Wayland session can be made available at the SDDM login screen by installing the package plasma-workspace-wayland. Please note the information on Wayland sessions in the KDE announcement.

Note: Due to Launchpad builder downtime and maintenance caused by the Meltdown/Spectre fixes, which limit us to the amd64/i386 architectures, these builds may be superseded by a rebuild once the builders are back to normal availability.

The primary purpose of this PPA is to assist testing for bugs and quality of the upcoming final Plasma 5.12 LTS release, due for release by KDE on 6th February.

It is anticipated that Kubuntu Bionic Beaver 18.04 LTS will ship with Plasma 5.12.4, the latest point release of 5.12 LTS available at release date.

Bug reports on the beta itself should be reported to bugs.kde.org.

Packaging bugs can be reported as normal to: Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Should any issues occur, please provide feedback on our mailing lists [1] or IRC [2]

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

Stephen Michael Kellat: Damage Report

Planet Ubuntu - Mër, 24/01/2018 - 4:24pd

In no particular order:

  • There was a "partial government shutdown" of the federal government of the United States of America. As a federal civil servant, I rated an "essential-excepted" designation this time, which required working without pay until the end of the crisis. Fortunately this did not change my tour of duty. Deroy Murdock has a good write-up of the sordid affair. Not all my co-workers at my bureau in the department were rated "essential-excepted", so I had to staff a specific appointment hotline to reschedule taxpayer appointments, as no specific outreach was made to tell any taxpayers that the in-person offices were closed.
  • The federal government of the United States of America remains without full-year appropriations for Fiscal Year 2018. Appropriations are set to lapse again on February 8, 2018. I've talked to family about apportioning costs of the household bills and have told them that even though I am working now nothing is guaranteed. Donations are always accepted via PayPal although they are totally not tax-deductible. I'm open to considering proposed transitions from the federal civil service and the data on LinkedIn is probably a good starting point if anybody wants to talk.
  • Sporadic power outages are starting to erupt in Ashtabula. A sudden outage happened on Monday during my Regular Day Off that stressed the many UPS units littered around the house. Multiple outages happened today that also stressed the UPS units. Nothing too unusual has been going on other than snow has been melting.
  • Finances are balanced on a knife edge. Who ever said a government job was a life of luxury?

Positives:

  • I haven't broken any computers recently
  • I haven't run any cell phones or other electronics through the washer/dryer combo
  • My limited work with the church in mission outreach to one of the local nursing homes is still going on
  • I own my home

It isn't all bad. Tallying up the damage lately has just taken a bit of energy. There has been a lot of bad stuff going on.

Nuritzi Sanchez: Meet Shobha Tyagi from GNOME.Asia Summit 2016

Planet GNOME - Mër, 24/01/2018 - 2:09pd

This month’s community spotlight is on Shobha Tyagi, one of the volunteer organizers of GNOME.Asia Summit 2016.

Courtesy of Shobha Tyagi

Shobha’s history with GNOME began when she participated in the Outreach Program for Women (OPW) internship in December 2013, with GNOME as her mentoring organization. She attended her first GUADEC in 2014 while she was an OPW intern, and met Emily Chen, who introduced her to the GNOME.Asia Summit.

Passionate about helping to spread GNOME throughout Asia, Shobha was resolute to rise to the challenge of bringing GNOME.Asia Summit to her home in Delhi, India. Fast-forward two years, Shobha is proudly leading the local organizing team of GNOME.Asia, which is ready to lift its curtain in Delhi, on April 21, 2016.

We chatted with Shobha about GNOME and her experience organizing GNOME.Asia.

Why did you choose to work with GNOME for your OPW internship?

To be honest, I thought that since GNOME organizes OPW, I would receive the most productive mentoring from GNOME. Sure enough, that happened! I decided to make my initial contribution to Documentation, and after that I met my guru and mentor, Ekaterina Gerasimova.

Courtesy of Shobha Tyagi

Do you have a favorite thing about GNOME?

My favorite thing about GNOME is its people. The same people who create it, maintain it, and use it – they are what makes GNOME really great. I really enjoy committing my patches directly to the upstream repositories and meeting the contributors in person. I also get great satisfaction whenever I tell people about GNOME and let them know how they can also contribute.

You submitted the winning bid to host GNOME.Asia Summit 2016; do you have any tips for those who are interested in bidding for upcoming GNOME conferences?

Sure! It does help if you have attended a GNOME conference in the past, but once you have made up your mind to bid, have faith in yourself and just write your proposal.

Can you describe a challenge you faced while organizing the GNOME.Asia Summit and how you overcame it?

There are many challenges, especially when you are the only one who knows the ins and outs of the event and have a limited amount of time. I’m surrounded by very supportive people. Even so, people expect more from the person who lays the initial groundwork. I thank the summit committee members for their tremendous help and persistence through countless IRC meetings and discussions, without which it would have been impossible to overcome all of the small obstacles throughout the entire planning experience.

What’s the most exciting part about being an organizer?

The most exciting part is learning new things! Writing sponsorship documents, calling for presentations, picking up basic web development skills, identifying keynote speakers, chief guests and sponsors, amongst other things. I learned first-hand what goes into designing logos, posters, and stickers. There were also other tasks that I wouldn’t have had to do in a normal situation, like arranging a day tour to the Taj Mahal for a big group.

Life after GNOME.Asia Summit Delhi; what is going to be your next project?

After the GNOME.Asia Summit, I would like to focus my efforts on establishing a GNOME user group in Delhi.

Advice for eager newcomers and first-time contributors?

My advice for them is to come and join GNOME! GNOME enables you and me to contribute, and when we contribute, we help each other improve our lives. If you are committed, you can commit patches too.

And now, some fun questions. What is your favorite color?

Yellow.

Favorite food?

All vegetarian Indian food.

What is your spirit animal?

Cow! They have a calm demeanor, and symbolize abundance and fertility since they represent both earth and sky.

Finally, and this one is important; what do you think cats dream about?

Cats dream about being loved, cared for and pampered by their master.

Shobha is helping to organize the 2016 GNOME.Asia Summit while working as an Assistant Professor at Manav Rachna International University, and pursuing a doctorate in Software Engineering. She has been a Foundation member since 2014, and has previously contributed to the Documentation team.

Thank you so much, Shobha, for sparing some of your time to talk to us! We wish you a successful Summit!

Interviewed by Adelia Rahim. 

Nuritzi Sanchez: Giving Spotlight | Meet Øyvind Kolås, GEGL maintainer extraordinaire

Planet GNOME - Mër, 24/01/2018 - 2:09pd

Last month, we had the pleasure of interviewing Øyvind Kolås, aka “pippin,” about his work on GEGL — a fundamental technology enabling GIMP and GNOME Photos.

GIMP Stickers, CC-BY-SA Michael Natterer

This interview is part of a “Giving Spotlight” series we are doing on some long-time GNOME contributors who have fundraising campaigns. The goal is to help GNOME users understand the importance of the technologies, get to know the maintainers, and learn how to support them.

Without further ado, we invite you to get to know Øyvind and his work on GEGL!

The following interview was conducted over email. 

Getting to know Øyvind

Where are you from and where are you based?

I’m from the town of Ørsta – at the end of a fjord in Norway – but since high school I’ve been quite migratory: studying fine art in Oslo and Spain, doing color science research at a color lab and lecturing on multimedia CD-ROM authoring in south-eastern Norway, and working on GNOME technologies like Clutter and cairo for OpenedHand and Intel in London, followed by half a decade of low-budget backpacking. At the moment I am based in Norway – and try to keep in touch with a few people and places – among others England, Germany, and Spain.

Øyvind “pippin” Kolås, CC BY-NC-ND Ross Burton

What do you do and how does it relate to GNOME?

I like tinkering with code – frequently code that involves graphics or UI. This results in sometimes useful, at other times odd, but perhaps interesting, tools, infrastructure, or other software artifacts. Through the years and combined with other interests, this has resulted in contributions to cairo and Clutter, as well as being the maintainer of babl and GEGL, which provide pixel handling and processing machinery for GIMP 2.9, 2.10 and beyond.

How did you first get involved in GNOME?

I attended the second GUADEC, which happened in Copenhagen in 2001. This was my first in-person meeting with the people behind the nicknames in #gimp, as well as with GIMP developers and power users and the wider community around them, including the GNOME project.

Why is your fundraising campaign important?

I want GIMP to improve and continue being relevant in the future, as well as having a powerful graph-based framework for other imaging tasks. I hope that my continued maintainership of babl/GEGL will enable many new and significant workflows in GIMP and related software, as well as provide a foundation for implementing and distributing experimental image processing filters.

Wilber Week 2017, a hackathon for GEGL and GIMP, CC-BY-SA Debarshi Ray

Getting to know GEGL

How did your project originate?

GEGL’s history starts in 1997 with Rhythm and Hues Studios, a Californian visual effects and animation company. They were experimenting with a 16-bit/high bit depth fork of GIMP known as filmgimp/cinepaint. Rhythm and Hues succeeded in making GIMP work on high bit depth images, but the internal architecture was found to be lacking – and they started GEGL as a more solid future basis for high bit depth non-destructive editing in GIMP. Their funding/management interest waned, and GEGL development went dormant. GIMP, however, continued considering GEGL to be its future core.

How did you start working on GEGL?

I’ve been making and using graphics-related software since the early ’90s. In 2003-2004 I made a video editor for my own use in a hobby collaborative music video/short film venture. This video editing project was discontinued and salvaged for spare parts, like babl and a large set of initial operations, when I took up maintainership and development of GEGL.

What are some of the greatest challenges that you’ve faced along the way?

When people get to know that I am somehow involved in development of the GIMP project, they expect me to be in control of and responsible for how the UI currently is. I have removed some GIMP menu items in attempts to clean things up and reduce technical debt, but most improvements I can take credit for now, and in the future, are indirect, like how moving to GEGL enables higher bit depths and on-canvas preview instead of using postage stamp-sized previews in dialogs.

What are some of your greatest successes?

Bringing GEGL from a duke-nukem-forever state, where GIMP was waiting on GEGL for all future enhancements, to GEGL waiting for GIMP to adopt it. The current development series of GIMP (2.9.x) is close to being released as 2.10, which will be the new stable version; it is a featureful release with feature parity with 2.8 but a new engine under the hood. I am looking forward to seeing where GIMP will take GEGL in the future.

What are you working on right now?

One of the things I am working on – and playing with – at the moment is experiments in color separation. I’m making algorithms that simulate the color mixing behavior of inks and paints. That might be useful in programs like GIMP for tasks ranging from soft-proofing spot-colors to preparing photos or designs for multi-color silk-screening, for instance for textiles.

Which projects depend on your project? What’s the impact so far?

There are GIMP and GNOME Photos, as well as imgflo, which is a visual front-end provided by the visual programming environment noflo. GEGL (and babl, a companion library), are designed to be generally useful and do not have any APIs that could only be considered useful for GIMP. GEGL itself also contains various example and experimental command line and graphical interfaces for image and video processing.

How can I get involved? 

GEGL continues needing, and thankfully getting, contributions, new filters, fixes to old filters, improvements to infrastructure, improved translations, and documentation. Making more projects use GEGL is also a good way of attracting more contributors. With funds raised through Liberapay and Patreon, I find it easier to allocate time and energy towards making the contribution experience of others smoother.

And now a few questions just for fun…

What is your favorite place on Earth?

Tricky – I have traveled a lot and not found a single place that is a definitive favorite. Places I’ve found to be to my liking are near the equator, have little seasonal variation, and are at sufficiently high altitude to cool down to a comfortable daytime high temperature of roughly 25 degrees Celsius.

Favorite ice cream?

Could I have two scoops in a waffle cone, one mango sorbet, one coconut please?

Finally, our classic question: what do you think cats dream about?

Some cats probably dream about being able to sneak through walls.

Øyvind Kolås, CC BY-NC-ND Ross Burton

Thank you, Øyvind, for your answers. We look forward to seeing your upcoming work on GEGL this year and beyond!

Please consider supporting Øyvind through his GEGL Liberapay or GEGL Patreon campaigns. 
