
Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

David Mohammed: Ubuntu Budgie 24.04 Release Notes

Mon, 15/04/2024 - 11:02pm

Ubuntu Budgie 24.04 LTS (Noble Numbat) is a Long Term Support release with 3 years of support from your distro maintainers, from April 2024 to May 2027. These release notes showcase the key takeaways for upgraders from 22.04 to 24.04. The areas covered in these release notes are: Quarter & half tiling is pretty much self-explanatory. Dragging a window to the…

Source

Paul Tagliamonte: Domo Arigato, Mr. debugfs

Sat, 13/04/2024 - 3:27pm

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I’ve always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don’t think I’m the only one.

At work I’ve been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I’ve decided to hack up a project that I’ve wanted to do ever since 2015 – write a debug filesystem. Let’s get to it.

Back to the Future

It shouldn't shock anyone to learn I'm a huge fan of Go, right?

Time to admit something. I really love Plan 9. It’s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good – like, correct. The bit that I’ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server – you mount the ftp server and access files like any other files. It’s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there’s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.

I even triggered a weird bug in vim when writing a 9p filesystem that wound up impacting WSL -- although it seems like maybe not due to 9p (rather, SMB)

The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption – 9p is in all sorts of places folks don’t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-9p) to share files between guest and host. If you’re noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except that maybe it’s well designed, simple to implement, and makes it a lot easier to validate the data being shared and enforce security boundaries. Simplicity has its value.

As a result, there’s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine – unlike fuse, where you’d need client-specific software to run in order to mount the directory. For instance, let’s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name “mountpointname” to /mnt.

Unfortunately, this requires root to mount and feels very un-plan9, but it does work and the protocol is good.

$ mount -t 9p \
    -o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
    127.0.0.1 \
    /mnt

Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT

"Simple" here is intended as my highest form of praise. Writing complex things is easy. Taking your work and simplifying it down to the core is the most difficult part of our work.

Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It’s a series of client-to-server requests, each of which receives a server-to-client response. These are, respectively, “T” messages, which transmit a request to the server, and “R” messages, the replies sent in response. These messages are TLV (type-length-value) payloads with a very straightforward structure – so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.
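To make that framing concrete, here's a minimal sketch of parsing the fixed 9p message header (size[4] type[1] tag[2], all little-endian, with size counting the whole message); this is illustrative only, not arigato's actual parser:

// A minimal sketch of 9p message framing; not arigato's actual parser.
// Every message begins with size[4] type[1] tag[2], little-endian,
// where size counts the entire message, including these 7 header bytes.
struct Header {
    size: u32, // total message length, header included
    ty: u8,    // message type: a T* request or an R* reply
    tag: u16,  // tag correlating a reply with its request
}

fn parse_header(buf: &[u8]) -> Option<Header> {
    if buf.len() < 7 {
        return None; // not enough bytes for a full header yet
    }
    Some(Header {
        size: u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]),
        ty: buf[4],
        tag: u16::from_le_bytes([buf[5], buf[6]]),
    })
}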

There's also 9P2000.L, a 9p variant with more Linux-specific extensions. There's a good chance I'll port this forward when I get the chance.

Later on, after the basics worked, I found a more complete spec page with more information about the Unix-specific variant I opted to use (9P2000.u rather than 9P2000), chosen because Linux has better support for 9P2000.u than for the plain 9P2000 protocol.

MR ROBOTO

It really bothers me that rust libraries that deal with I/O need to support std::io, but to add support for async runtimes, you need to implement support for tokio::io and every other runtime; but them's the breaks, I guess. I really miss Go's built-in async support and io module.

The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I’d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server with an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to hide implementation possibilities. At each abstraction level, there’s an escape hatch – allowing someone to implement any of the layers if required. I called this framework arigato, which can be found over on docs.rs and crates.io.

/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}

/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` at the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    async fn write_at(
        &mut self,
        buf: &[u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade ago paultag! If this isn't my record for longest idea-to-wip-project time, it's close.

Let’s do it! Let’s use arigato to implement a 9p filesystem we’ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We’ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don’t know much about how an apt repo works, here’s the 2-second crash course on what we’re doing. The first step is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.
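That suite/component/architecture hierarchy composes directly into the URL we are about to fetch. Here's a hypothetical helper showing the composition (the function is mine for illustration, not part of debugfs):

// Compose the Packages.xz URL from mirror, suite, component, and
// architecture, following the dists/ layout described above.
fn packages_url(mirror: &str, suite: &str, component: &str, arch: &str) -> String {
    // e.g. packages_url("https://deb.debian.org/debian-debug",
    //                   "unstable-debug", "main", "amd64")
    format!("{mirror}/dists/{suite}/{component}/binary-{arch}/Packages.xz")
}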

$ curl \
    https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | unxz

This will return the Debian-style rfc2822-like headers, which are an export of the metadata contained inside each .deb file, which apt (or other tools that can use the apt repo format) uses to fetch information about debs. Let’s take a look at the debug headers for the netlabel-tools package in unstable – which is a package named netlabel-tools-dbgsym in unstable-debug.

Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60

So here, we can parse the package headers in the Packages.xz file and store, for each Build-Id, the Filename from which we can fetch the .deb. Each .deb contains a number of files – but we’re only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It’s crude, and very single-purpose, but I’m feeling a bit lazy.
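As a sketch of what that looks like, here's a simplified fold of Packages stanzas into a Build-Id to Filename map (illustrative only; the real logic lives in rfc822.rs and must handle more of the format, such as continuation lines):

use std::collections::HashMap;

// Stanzas are separated by blank lines; fields look like "Key: value".
// This ignores continuation lines and other rfc2822-like details.
fn build_id_index(packages: &str) -> HashMap<String, String> {
    let mut index = HashMap::new();
    for stanza in packages.split("\n\n") {
        let mut build_ids = Vec::new();
        let mut filename = None;
        for line in stanza.lines() {
            if let Some(v) = line.strip_prefix("Build-Ids:") {
                build_ids.extend(v.split_whitespace().map(String::from));
            } else if let Some(v) = line.strip_prefix("Filename:") {
                filename = Some(v.trim().to_string());
            }
        }
        if let Some(f) = filename {
            for id in build_ids {
                index.insert(id, f.clone());
            }
        }
    }
    index
}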

Who needs dpkg?!

Hilariously, the fourth? fifth? non-serious time (second serious time) I've had to do this for a new language.

For folks who haven’t seen it yet, a .deb file is a special type of .ar file that (usually) contains three files inside – debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed size (60 byte) entry header, followed by the specified number of bytes of data.

[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...

I can't believe it's already been over a decade since my NM process, and nearly 16 years since I became an Ubuntu member.
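To make that 60-byte entry header concrete, here's a hedged sketch of parsing it; the field offsets follow the standard ar(5) layout, but the struct and names are mine, not necessarily what ar.rs does:

// Parse one fixed-size ar entry header. In the common ar format the
// fields are: name (0..16), mtime (16..28), uid (28..34), gid (34..40),
// mode (40..48), size (48..58), and the terminator "`\n" (58..60).
struct ArEntry {
    name: String, // member file name, space-padded in the header
    size: u64,    // member size in bytes, decimal ASCII in the header
}

fn parse_entry(hdr: &[u8; 60]) -> Option<ArEntry> {
    if hdr[58..60] != *b"`\n" {
        return None; // corrupt or misaligned header
    }
    let name = String::from_utf8_lossy(&hdr[0..16]).trim_end().to_string();
    let size = String::from_utf8_lossy(&hdr[48..58]).trim().parse().ok()?;
    Some(ArEntry { name, size })
}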

First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let’s break apart a .deb file by hand – something that is a bit of a rite of passage (or at least it used to be? I’m getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.

$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz  debian-binary
data.tar.xz     netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug

Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entry from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file) – at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all.
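Here's a hedged sketch of what such a read_at-style helper over Range requests could look like (using reqwest for brevity; the actual hrange.rs may differ):

// Fetch up to buf.len() bytes at `offset` from `url` via an HTTP Range
// request. A robust version would verify the 206 Partial Content status
// and handle servers that ignore Range entirely.
async fn http_read_at(
    client: &reqwest::Client,
    url: &str,
    buf: &mut [u8],
    offset: u64,
) -> reqwest::Result<usize> {
    if buf.is_empty() {
        return Ok(0);
    }
    let end = offset + buf.len() as u64 - 1; // Range end is inclusive
    let resp = client
        .get(url)
        .header("Range", format!("bytes={offset}-{end}"))
        .send()
        .await?
        .error_for_status()?;
    let body = resp.bytes().await?;
    let n = body.len().min(buf.len());
    buf[..n].copy_from_slice(&body[..n]);
    Ok(n)
}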

I really like HTTP Range requests a lot. I did some stats to figure out what compression dbgsym packages use these days; my LAN debug mirror contains 113459 xz compressed tarfiles, and 9 gzip compressed tarfiles at the time of writing.

After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response, is not the same as std::io::Read, is not the same as an async (or sync) Iterator, is not the same as what the xz2 crate expects; leading me to read blocks of data into a buffer and stuff them through the decoder, looping over the buffer for each lzma2 packet), and a tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can’t seek, but gdb needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory and pass a handle to it back to the user.

From here on out, it’s a matter of gluing together a File traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!

A quick diversion about compression

I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues figuring out a way to seek around an xz file. What’s interesting is that xz has a great primitive to solve this specific problem (specifically, using a block size that allows you to seek to the block boundary just before your desired position, discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don’t know why I would have expected any different, in retrospect. That means that this now devolves into the base case of “How do I seek around an lzma2 compressed data stream”, which is a lot more complex of a question.

After going through a lot of this, I realized just how complex the xz format is -- it's a lot more than just lzma2!

Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar – adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name “targz” on his GitHub, which has been a ton of fun to read through.

The gist is that, by dumping the decompressor’s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific “checkpoints”, along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where they left off – creating a similar “block” mechanism against the wishes of gzip. It means you’d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you’ve taken.
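A minimal sketch of that checkpoint bookkeeping, under the assumption of a list sorted by decompressed offset (the types and names here are mine, not Jon's targz code; priming the decompressor with the saved window is the library-specific part left out):

// A checkpoint pairs offsets in the compressed and decompressed streams
// with the decompressor state (for DEFLATE, the 32 KiB window) needed
// to resume decompression at that point.
struct Checkpoint {
    compressed_offset: u64,   // where to seek in the compressed stream
    decompressed_offset: u64, // how much output precedes this point
    window: Vec<u8>,          // saved history window / decompressor state
}

// Pick the latest checkpoint at or before the target decompressed offset;
// decompression resumes there and discards bytes up to the target.
// Assumes `checkpoints` is sorted by decompressed_offset.
fn nearest_checkpoint(checkpoints: &[Checkpoint], target: u64) -> Option<&Checkpoint> {
    checkpoints
        .iter()
        .take_while(|c| c.decompressed_offset <= target)
        .last()
}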

Given the complexity of xz and lzma2, I don’t think this is possible for me at the moment – especially given most of the files I’ll be requesting will not be loaded from again – especially when I can “just” cache the debug header by Build-Id. I want to implement this (because I’m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I’d ever say out loud), but for now I’m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.gz members are so small.

The Good

First, the good news, right? It works! That’s pretty cool. I’m positive my younger self would be amused and happy to see this working, as is current-day paultag. Let’s take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge:

$ mount \
    -t 9p \
    -o trans=tcp,version=9p2000.u,aname=unstable-debug \
    192.168.0.2 \
    /usr/lib/debug/.build-id/

And, let’s prove to ourselves that this actually mounted before we go trying to use it:

$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)

Slick. We’ve got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let’s take a look at it.

$ ls /usr/lib/debug/.build-id/
00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f
20 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f
30 31 32 33 34 35 36 37 38 39 3a 3b 3c 3d 3e 3f
40 41 42 43 44 45 46 47 48 49 4a 4b 4c 4d 4e 4f
50 51 52 53 54 55 56 57 58 59 5a 5b 5c 5d 5e 5f
60 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f
70 71 72 73 74 75 76 77 78 79 7a 7b 7c 7d 7e 7f
80 81 82 83 84 85 86 87 88 89 8a 8b 8c 8d 8e 8f
90 91 92 93 94 95 96 97 98 99 9a 9b 9c 9d 9e 9f
a0 a1 a2 a3 a4 a5 a6 a7 a8 a9 aa ab ac ad ae af
b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 ba bb bc bd be bf
c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 ca cb cc cd ce cf
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 da db dc dd de df
e0 e1 e2 e3 e4 e5 e6 e7 e8 e9 ea eb ec ed ee ef
f0 f1 f2 f3 f4 f5 f6 f7 f8 f9 fa fb fc fd fe ff

Outstanding. Let’s try using gdb to debug a binary that was provided by the Debian archive, and see if it’ll load the ELF by build-id from the right .deb in the unstable-debug suite:

$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Yes! Yes it will!

$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad

Linux’s support for 9p is mainline, which is great, but it’s not robust. Network issues or server restarts will wedge the mountpoint (Linux can’t reconnect when the tcp connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter – for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just gotten a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there’s not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which maybe is worth it).

The Ugly

Unfortunately, we don’t know the file size(s) until we’ve actually opened the underlying tar file and found the correct member, so for most files, we don’t know the real size to report when getting a stat. We can’t parse the tarfiles for every stat call, since that’d make ls even slower (bummer). The only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let’s try with a size of 0 to start:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]

This obviously won’t work since gdb will throw away all our hard work because of stat’s output, and neither will loading the real size of the underlying file. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let’s try it again:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here

Do I think this is a particularly good idea? I mean; kinda. I’m probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don’t think I’ll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and fix the i/o performance. I think it was a useful exercise and is a pretty great hack, but I don’t think this’ll be shipping anywhere anytime soon.

Along with me publishing this post, I’ve pushed up all my repos; so you should be able to play along at home! There’s a lot more work to be done on arigato; but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my github at arigato, debugfs and also on crates.io and docs.rs.

At least I can say I was here and I got it working after all these years.

Scarlett Gately Moore: Kubuntu: Noble Numbat Beta available! Qt6 snaps coming soon.

Fri, 12/04/2024 - 9:29pm

It has been a very busy couple of weeks as we worked through some major transitions and a security fix that required a rebuild of the $world. I am happy to report that against all odds we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze, I have already begun pushing our fixes for known issues today. A big one being our new branding! Very exciting times in the Kubuntu world.

In the snap world I will be using my free time to start knocking out KDE applications (not covered by the project). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!

Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !

On a personal level, I am still looking to help with my grandson and you can find that here: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

Thanks for stopping by,

Scarlett

Ubuntu Studio: Ubuntu Studio 24.04 LTS Beta Released

Fri, 12/04/2024 - 2:40am

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 24.04 LTS, codenamed “Noble Numbat”.

While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 24.04 is released on April 25, 2024.

Special Notes

The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/24.04/beta/

Full updated information, including Upgrade Instructions, are available in the Release Notes.

Please note that upgrading before the release of 24.04.1, due August 2024, is unsupported.

New Features This Release
  • PipeWire continues to improve with every release and is now robust enough for professional and prosumer use. Version 1.0.4 is included.
  • Ubuntu Studio Installer’s included Ubuntu Studio Audio Configuration utility for fine-tuning the PipeWire setup or changing the configuration altogether now includes the ability to create or remove a dummy audio device. Version 1.9 is included.
Major Package Upgrades
  • Ardour version 8.4.0
  • Qtractor version 0.9.39
  • OBS Studio version 30.0.2
  • Audacity version 3.4.2
  • digiKam version 8.2.0
  • Kdenlive version 23.08.5
  • Krita version 5.2.2

There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.

Known Issues
  • Ubuntu Studio’s classic PulseAudio-JACK configuration cannot be used on Ubuntu Desktop (GNOME) due to a known issue with the ubuntu-desktop metapackage. (LP: #2033440)
  • We now discourage the use of the aforementioned classic PulseAudio-JACK configuration as PulseAudio is becoming deprecated with time in favor of PipeWire. PipeWire’s JACK configuration can be disabled to use JACK2 via QJackCTL for advanced users.
  • Due to the Ubuntu repositories being in-flux following the time_t transition and xz-utils security issue resolution, some items in the repository are uninstallable or causing other packaging conflicts. The Ubuntu Release Team is working around the clock to help resolve these issues, so patience is required.

Official Ubuntu Studio release notes can be found at https://ubuntustudio.org/ubuntu-studio-24-04-LTS-release-notes/

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/NobleNumbat/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890

How You Can Help

Please test using the test cases on https://iso.qa.ubuntu.com. All you need is a Launchpad account to get started.

Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. See this post as to the reasons why and go here to see how you can contribute financially (options are also in the sidebar).

Frequently Asked Questions

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.

Thunderbird has become a snap this cycle in order for the maintainers to get security patches delivered faster.

Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: If I install this Beta release, will I have to reinstall when the final release comes out?
A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before final release, then you might end up with a double-installation of Audacity. Removal instructions for one or the other will be made available in a future post.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.

Q: What if I don’t want all these packages installed on my machine?
A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!

Lubuntu Blog: Lubuntu Noble Beta Released!

Thu, 11/04/2024 - 11:04pm
We are happy to announce the Beta release for Lubuntu Noble (what will become 24.04 LTS)! What makes this cycle unique? Lubuntu is a lightweight flavor of Ubuntu, based on LXQt and built for you. As an official flavor, we benefit from Canonical’s infrastructure and assistance, in addition to the support and enthusiasm from the […]

Lukas Märdian: Netplan v1.0 paves the way to stable, declarative network management

Thu, 04/04/2024 - 5:39pm

New “netplan status --diff” subcommand, finding differences between configuration and system state

As the maintainer and lead developer for Netplan, I’m proud to announce the general availability of Netplan v1.0 after more than 7 years of development efforts. Over the years, we’ve so far had about 80 individual contributors from around the globe. This includes many contributions from our Netplan core-team at Canonical, but also from other big corporations such as Microsoft or Deutsche Telekom. Those contributions, along with the many we receive from our community of individual contributors, solidify Netplan as a healthy and trusted open source project. In an effort to make Netplan even more dependable, we started shipping upstream patch releases, such as 0.106.1 and 0.107.1, which make it easier to integrate fixes into our users’ custom workflows.

With the release of version 1.0 we primarily focused on stability. However, being a major version upgrade, it allowed us to drop some long-standing legacy code from the libnetplan1 library. Removing this technical debt increases the maintainability of Netplan’s codebase going forward. The upcoming Ubuntu 24.04 LTS and Debian 13 releases will ship Netplan v1.0 to millions of users worldwide.

Highlights of version 1.0

In addition to stability and maintainability improvements, it’s worth looking at some of the new features that were included in the latest release:

  • Simultaneous WPA2 & WPA3 support.
  • Introduction of a stable libnetplan1 API.
  • Mellanox VF-LAG support for high performance SR-IOV networking.
  • New hairpin and port-mac-learning settings, useful for VXLAN tunnels with FRRouting.
  • New netplan status --diff subcommand, finding differences between configuration and system state.

Besides those highlights of the v1.0 release, I’d also like to shed some light on new functionality that was integrated within the past two years for those upgrading from the previous Ubuntu 22.04 LTS which used Netplan v0.104:

  • We added support for the management of new network interface types, such as veth, dummy, VXLAN, VRF or InfiniBand (IPoIB). 
  • Wireless functionality was improved by integrating Netplan with NetworkManager on desktop systems, adding support for WPA3 and adding the notion of a regulatory-domain, to choose proper frequencies for specific regions. 
  • To improve maintainability, we moved to Meson as Netplan’s buildsystem, added upstream CI coverage for multiple Linux distributions and integrations (such as Debian testing, NetworkManager, snapd or cloud-init), checks for ABI compatibility, and automatic memory leak detection. 
  • We increased consistency between the supported backend renderers (systemd-networkd and NetworkManager), by matching physical network interfaces on permanent MAC address, when the match.macaddress setting is being used, and added new hardware offloading functionality for high performance networking, such as Single-Root IO Virtualisation virtual function link-aggregation (SR-IOV VF-LAG).

The much improved Netplan documentation, now hosted on “Read the Docs”, and new command line subcommands, such as netplan status, make Netplan a well-suited tool for declarative network management and troubleshooting.

Integrations

Those changes pave the way to integrate Netplan in 3rd party projects, such as system installers or cloud deployment methods. By shipping the new python3-netplan Python bindings to libnetplan, it is now easier than ever to access Netplan functionality and network validation from other projects. We are proud that the Debian Cloud Team chose Netplan to be the default network management tool in their official cloud-images for Debian Bookworm and beyond. Ubuntu’s NetworkManager package now uses Netplan as its default backend on Ubuntu 23.10 Desktop systems and beyond. Further integrations happened with cloud-init and the Calamares installer.

Please check out the Netplan version 1.0 release on GitHub! If you want to learn more, follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

Dougie Richardson: Update Plesk Docker Images

Sun, 31/03/2024 - 3:35pm

Docker > Settings > Overview > Recreate, making sure that “Reset variable to default” is not checked.

Finally start.

Simos Xenitellis: How to install and setup the Incus Web UI

Thu, 28/03/2024 - 11:16pm

Incus is a manager for virtual machines (VM) and system containers. There is also an Incus support forum.

Typically you would use the incus command-line interface (CLI) client to get access to the Incus manager and perform the tasks for the full life-cycle of the virtual machines and system containers.

In this post we see how to install and set up the Incus Web UI. Just like the incus CLI tool that gets access to the REST API of the Incus manager (through a Unix socket or HTTPS), the Incus Web UI does the same over HTTPS. I assume that you have already installed and set up Incus.

Prerequisites

You should already have an installation of Incus. If you do not have one yet, see the official documentation on Incus installation and Incus migration, or my prior posts on Incus installation and Incus migration.

Installing the Incus Web UI package

The Incus Web UI package is incus-ui-canonical. We install it. By installing the package, Incus can serve the necessary Web pages (from /opt/incus/ui) so that we can connect with our browser and manage Incus itself.

sudo apt install -y incus-ui-canonical

Preparing Incus to serve the Web UI

By default, Incus does not listen on a Web port, so we cannot access it directly through the browser. We first need to configure Incus to enable Web access. Initially there is no configuration, as incus config show reveals.

debian@myincus:~$ incus config show
config: {}
debian@myincus:~$

We activate the Incus Web server, selecting the port number 8443. You are free to select another one, if you need to. We set core.https_address to :8443. This information appears in the incus config output.

debian@myincus:~$ incus config set core.https_address :8443
debian@myincus:~$ incus config show
config:
  core.https_address: :8443
debian@myincus:~$

Let’s verify that Incus is now listening on port 8443. Yes, it is, on all interfaces (because of the *).

debian@myincus:~$ sudo apt install -y lsof
...
debian@myincus:~$ sudo lsof -i :8443
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
incusd  8338 root    8u  IPv6  29751      0t0  TCP *:8443 (LISTEN)
debian@myincus:~$

This is HTTPS: where are the certificate and the server key (private key)?

debian@myincus:~$ sudo ls -l /var/lib/incus/server.key /var/lib/incus/server.crt
-rw-r--r-- 1 root root 753 Mar 28 18:54 /var/lib/incus/server.crt
-rw------- 1 root root 288 Mar 28 18:54 /var/lib/incus/server.key
debian@myincus:~$ sudo openssl x509 -in /var/lib/incus/server.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            22:05:f1:14:f2:82:43:68:44:5e:1c:42:4c:28:5b:5c
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: O = Linux Containers, CN = root@myincus
        Validity
            Not Before: Mar 28 18:54:17 2024 GMT
            Not After : Mar 26 18:54:17 2034 GMT
        Subject: O = Linux Containers, CN = root@myincus
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (384 bit)
                pub:
                    04:fb:cd:b6:b2:25:55:68:a5:33:75:48:4c:b0:7a:
                    2f:e9:c0:16:af:6f:b2:36:f9:19:6e:b0:86:bf:d1:
                    9f:07:16:b1:26:8b:75:36:f2:fc:02:38:c7:fa:25:
                    39:01:6c:bb:48:a9:4f:57:0d:af:e1:0f:a3:cf:b1:
                    7c:a2:d9:46:77:e7:94:c7:00:1a:d0:5f:5f:93:d8:
                    11:39:8d:16:0e:d0:62:98:81:93:da:ec:b8:70:24:
                    f2:c4:da:91:0f:f8:8e
                ASN1 OID: secp384r1
                NIST CURVE: P-384
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name:
                DNS:myincus, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:64:02:30:15:f4:fa:7b:d6:52:79:d4:c9:27:b9:d6:6c:90:
        f7:0e:13:83:15:ac:af:cd:c5:f2:48:08:99:7f:7b:94:55:06:
        81:95:80:5f:0a:21:17:82:61:ac:5a:b6:5f:b8:49:b3:02:30:
        62:a3:92:66:da:ce:7c:01:49:7e:38:16:c6:16:b3:cb:aa:3d:
        1d:3f:63:12:93:e8:a1:0b:55:f0:80:99:d5:80:8a:a3:a6:2e:
        3d:68:90:a6:dc:55:29:0b:36:80:36:72
debian@myincus:~$

Note that this is a self-signed certificate. Chrome, Firefox and other browsers will complain; you can still accept to continue, but it will show a broken padlock in the address bar. If you wish, you can replace these with proper certificates so that the padlock is intact. To do so, replace the server key and the server certificate with actual values, then restart Incus. If, however, you are running an Incus cluster, you must use incus cluster update-certificate instead to update them. Note that a common alternative to dealing with Incus certificates is to use a reverse-proxy; you get the reverse-proxy to use a proper certificate and leave Incus as is.

At this point Incus is configured. We can continue with the next step where we get the client (our browser) to be authenticated to the server.

Getting the browser to authenticate to the server

Visit the URL of your Incus server with your browser. At first you will likely be confronted with a message that the server certificate is not accepted (Warning: Potential Security Risk Ahead). Click to Accept and continue. Then, you are presented with the following screen that asks you to log in. You are authenticated to the Incus server through user certificates, and you are prompted here to create them. Your browser will create

  1. a user certificate to be installed into Incus (incus-ui.crt)
  2. the same user certificate with a private key that will be set up in your browser(s) (incus-ui.pfx).

Click on Create a new certificate.

Creating a new certificate.

Now click on Generate to get your browser to generate the private key and the certificate.

You are asked whether you want to protect the certificate with a password. In our case we click on Skip because we do not want to encrypt the private key with a password. By clicking on Skip, the private key is still generated but it is not encrypted.

At this point the browser has generated incus-ui.crt, which is the user certificate to install in Incus. In the following, we add the user certificate to Incus.

debian@myincus:~$ incus config trust list
+------+------+-------------+-------------+-------------+
| NAME | TYPE | DESCRIPTION | FINGERPRINT | EXPIRY DATE |
+------+------+-------------+-------------+-------------+
debian@myincus:~$ incus config trust add-certificate incus-ui.crt
debian@myincus:~$ incus config trust list
+--------------+--------+-------------+--------------+----------------------+
|     NAME     |  TYPE  | DESCRIPTION | FINGERPRINT  |     EXPIRY DATE      |
+--------------+--------+-------------+--------------+----------------------+
| incus-ui.crt | client |             | b89b80eb4c89 | 2026/12/23 21:08 UTC |
+--------------+--------+-------------+--------------+----------------------+
debian@myincus:~$

The two files have been generated. We are adding incus-ui.crt to Incus, and incus-ui.pfx to the Web browser.

The page above has instructions on how to add the user certificate to Firefox, Chrome, Edge and macOS. For example, in the case of Firefox, type the following into the address bar and press Enter. Alternatively, go to Settings → Privacy & Security → Certificates. There, click on View Certificates… and select the Your Certificates tab. Finally, click to Import… the incus-ui.pfx certificate file.

about:preferences#privacy

This is found in Firefox under Settings → Privacy & Security → Certificates.

When you add the incus-ui.pfx user certificate in Firefox, it will appear as in the following screenshot.

The incus-ui.pfx certificate has been added to this instance of Firefox.

Subsequently, switch back to the Firefox tab with the Incus UI page and you are shown the following prompt to get your browser to send the user certificate to the Incus manager in order to get authenticated, and be able to manage Incus through the Web. Click on OK.

You are prompted to identify yourself to Incus UI in order to be able to manage the Incus installation.

Finally, you are able to manage Incus over the Web with Incus UI. The Web page loads up and you can perform all tasks that you can do with the incus command-line client.

Your browser is now authenticated through your user certificate and you can manage Incus over the Web with Incus UI.

Using the Incus UI

We click on Create Instance to create a first instance. We select from the list which image to use, then click to Create and start.

Creating an instance and starting it.

While the instance is created, you are updated with the different steps that take place. In the end, the instance is successfully launched.

The instance has been created and is running.

Conclusion

With Incus UI you are able to go through the whole workflow of managing Incus instances through your Web browser. Incus UI has been implemented as a stateless Web application, which means that no information is stored on the browser. For example, the browser does not maintain a database with the created instances; the state is maintained on Incus.

In this post we saw how to set up Incus UI with SSL/TLS authentication. It’s also possible to set up Incus UI to use Single Sign-On (SSO). Here is a tutorial on how to set up Incus UI with Open-ID Connect (OIDC).

There are a few more UI Web applications for Incus, including lxops. At some point in the future I expect to cover them as well.

Tips and Tricks

How to make the Incus port accessible to localhost only

The address has the format <ip address>:<port>. You can specify localhost (127.0.0.1) for the IP address part. By doing so, Incus will bind to localhost and listen to local connections only.

debian@myincus:~$ incus config show
config:
  core.https_address: :8443
debian@myincus:~$ incus config set core.https_address 127.0.0.1:8443
debian@myincus:~$ incus config show
config:
  core.https_address: 127.0.0.1:8443
debian@myincus:~$ sudo lsof -i :8443
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
incusd  8338 root    8u  IPv4  30315      0t0  TCP localhost:8443 (LISTEN)
debian@myincus:~$

What’s in incus-ui.crt and incus-ui.pfx?

You can use openssl to decode both files. This is an RSA 2048-bit certificate using the SHA-1 hash function.

$ openssl x509 -in incus-ui.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            01:12:00:11:07:65:00:03:00:10:00:41:00:04:09:11
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: C = AU, ST = Some-State, O = Incus UI 10.10.10.98 (Browser Generated)
        Validity
            Not Before: Mar 28 21:08:58 2024 GMT
            Not After : Dec 23 21:08:58 2026 GMT
        Subject: C = AU, ST = Some-State, O = Incus UI 10.10.10.98 (Browser Generated)
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:ce:f8:1d:67:e1:a3:f5:1a:16:b6:26:63:8f:32:
                    42:99:0d:af:86:8b:18:49:1a:4b:8e:ab:68:e1:04:
                    ba:24:dd:e6:27:d5:df:7a:13:cf:16:b3:33:28:89:
                    e0:ab:c8:dc:c1:2a:0a:de:ed:26:3a:77:74:dd:42:
                    1c:e2:22:fc:a5:a5:68:c1:c9:3b:4d:12:15:27:ae:
                    c6:50:ec:dc:f1:0a:ba:00:0c:83:d0:0d:0f:81:90:
                    4e:30:43:cb:45:bf:e2:e9:17:39:40:3b:95:8b:8b:
                    18:e9:59:51:fc:9a:7a:80:e4:73:b3:54:bd:ff:1c:
                    7c:81:75:16:e3:6f:3a:56:9b:0f:a3:73:55:45:03:
                    d8:fb:f3:34:4c:60:4f:f2:67:9f:66:ea:29:29:78:
                    6c:66:05:d6:7d:96:cd:0f:2b:4b:9c:71:2c:09:6f:
                    e2:b4:23:d0:5d:d0:fe:b0:6a:b1:58:5e:d7:b5:47:
                    9e:aa:47:34:f8:7d:e1:ed:fe:bf:97:3d:99:49:42:
                    af:e2:e5:b3:c5:1e:58:b1:98:01:db:8f:25:9f:f8:
                    d9:03:02:06:f9:99:0a:3a:a1:70:9d:fe:64:0d:c2:
                    d8:cc:f0:1c:53:e4:31:4c:78:12:c2:fd:72:23:6a:
                    f4:7e:41:f9:d5:df:6b:ad:2c:52:29:d0:7f:eb:65:
                    64:0f
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha1WithRSAEncryption
    Signature Value:
        28:b3:5c:48:64:8c:23:82:dd:e2:05:6a:9d:18:dd:43:f4:07:
        e6:be:1e:80:b7:f9:0c:0f:3d:cd:b8:bd:7b:55:7e:36:6d:74:
        24:d5:69:b2:24:51:3a:2d:c5:95:68:b5:dc:27:d5:83:d9:bc:
        cb:d0:fd:55:24:63:7d:c6:65:9b:f1:b3:9d:f7:b4:4e:ba:83:
        eb:bf:f5:d0:f6:95:2d:7b:90:4e:d3:89:ac:f0:87:e6:fa:9d:
        f6:ea:c2:42:f2:15:17:74:5c:e4:3c:ed:1a:42:3c:e7:04:aa:
        65:42:3e:75:5c:24:8e:52:85:0d:4b:b2:e2:ec:fa:57:4a:68:
        35:4b:8f:3c:13:fc:15:09:80:5a:b1:c8:e0:22:f5:69:25:4b:
        46:8b:e0:b9:e1:3a:f5:0c:40:d2:c3:75:9c:79:9a:aa:68:9b:
        21:36:ed:67:cb:6d:fc:bc:f0:0b:5a:2b:1a:4c:73:67:c5:79:
        b6:27:b9:58:d0:c7:ea:84:21:bf:f4:7c:44:11:d7:88:ab:1d:
        e4:53:c9:10:cd:e6:b8:5a:7a:92:73:a8:1e:fe:1c:2e:dc:e8:
        7e:3d:e9:a2:6d:26:5a:09:40:a1:3e:51:40:8b:da:57:37:9a:
        8d:0e:d8:cf:c1:0a:b1:0b:95:53:05:41:29:39:af:93:9b:aa:
        10:af:a1:6c
$

For the incus-ui.pfx file, we first convert to the PEM format, then print the contents. The PFX file contains the certificate (the same that was added earlier to Incus) along with the private key.

$ openssl pkcs12 -in incus-ui.pfx -out incus-ui.pem -noenc
Enter Import Password:
$ cat incus-ui.pem
Bag Attributes
    localKeyID: 3A 23 25 F7 56 4D 71 B8 FB FD 72 90 2D A1 F3 B8 2F 01 5E 92
    friendlyName: Incus-UI
subject=C = AU, ST = Some-State, O = Incus UI 10.10.10.98 (Browser Generated)
issuer=C = AU, ST = Some-State, O = Incus UI 10.10.10.98 (Browser Generated)
-----BEGIN CERTIFICATE-----
MIIDMjCCAhqgAwIBAgIQARIAEQdlAAMAEABBAAQJETANBgkqhkiG9w0BAQUFADBV
MQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTExMC8GA1UEChMoSW5j
dXMgVUkgMTAuMTAuMTAuOTggKEJyb3dzZXIgR2VuZXJhdGVkKTAeFw0yNDAzMjgy
MTA4NThaFw0yNjEyMjMyMTA4NThaMFUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIEwpT
b21lLVN0YXRlMTEwLwYDVQQKEyhJbmN1cyBVSSAxMC4xMC4xMC45OCAoQnJvd3Nl
ciBHZW5lcmF0ZWQpMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzvgd
Z+Gj9RoWtiZjjzJCmQ2vhosYSRpLjqto4QS6JN3mJ9XfehPPFrMzKIngq8jcwSoK
3u0mOnd03UIc4iL8paVowck7TRIVJ67GUOzc8Qq6AAyD0A0PgZBOMEPLRb/i6Rc5
QDuVi4sY6VlR/Jp6gORzs1S9/xx8gXUW4286VpsPo3NVRQPY+/M0TGBP8mefZuop
KXhsZgXWfZbNDytLnHEsCW/itCPQXdD+sGqxWF7XtUeeqkc0+H3h7f6/lz2ZSUKv
4uWzxR5YsZgB248ln/jZAwIG+ZkKOqFwnf5kDcLYzPAcU+QxTHgSwv1yI2r0fkH5
1d9rrSxSKdB/62VkDwIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQAos1xIZIwjgt3i
BWqdGN1D9Afmvh6At/kMDz3NuL17VX42bXQk1WmyJFE6LcWVaLXcJ9WD2bzL0P1V
JGN9xmWb8bOd97ROuoPrv/XQ9pUte5BO04ms8Ifm+p326sJC8hUXdFzkPO0aQjzn
BKplQj51XCSOUoUNS7Li7PpXSmg1S488E/wVCYBascjgIvVpJUtGi+C54Tr1DEDS
w3WceZqqaJshNu1ny238vPALWisaTHNnxXm2J7lY0MfqhCG/9HxEEdeIqx3kU8kQ
zea4WnqSc6ge/hwu3Oh+PemibSZaCUChPlFAi9pXN5qNDtjPwQqxC5VTBUEpOa+T
m6oQr6Fs
-----END CERTIFICATE-----
Bag Attributes
    localKeyID: 3A 23 25 F7 56 4D 71 B8 FB FD 72 90 2D A1 F3 B8 2F 01 5E 92
    friendlyName: Incus-UI
Key Attributes: <No Attributes>
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDO+B1n4aP1Gha2
JmOPMkKZDa+GixhJGkuOq2jhBLok3eYn1d96E88WszMoieCryNzBKgre7SY6d3Td
QhziIvylpWjByTtNEhUnrsZQ7NzxCroADIPQDQ+BkE4wQ8tFv+LpFzlAO5WLixjp
WVH8mnqA5HOzVL3/HHyBdRbjbzpWmw+jc1VFA9j78zRMYE/yZ59m6ikpeGxmBdZ9
ls0PK0uccSwJb+K0I9Bd0P6warFYXte1R56qRzT4feHt/r+XPZlJQq/i5bPFHlix
mAHbjyWf+NkDAgb5mQo6oXCd/mQNwtjM8BxT5DFMeBLC/XIjavR+QfnV32utLFIp
0H/rZWQPAgMBAAECggEBAMm1N/tpBgC291F4YmlJg2xk0R8f6oA8V0zpMyKyF7Qc
atWB8/Wm3pnx9bbZgRQKg1LiZYvTtgEfMM7+QuYFURMi/NB4DQpUyDdPd0mhPsbQ
WVH8mnqA5HOzVL3/HHyBdRbjbzpWmw+jc1VFA9j78zRMYE/yZ59m6ikpeGxmBdZ9
+uKyZ4U4/TORu2tadg9frtUl1HhkY1zGAxOyJUbCOVIbZF2iQt5zMZt4XLFhKgwh
jtDklc3dFIDigUZzpMgdLExLWi6CGT++cjJGpseM+QOAubSoCmT6eIs8qi9KpQhk
aZYBerWqBxswkmNGK4Zh+5gFvdW7EmEp128hATgYZGECgYEA7ckh3qL4Jg6FQA8+
UeEoaT2CvDI89HMJfFN2NvU1ZklqP9aDnPvMjui/h/8HtDeb+5FWFZHF1B9laJp3
HnGGt+98/aO9skdFQDiszclDNIHdpSqcD2LWkKz84QTWqTTkRAxJpgnW91oURtyh
WVH8mnqA5HOzVL3/HHyBdRbjbzpWmw+jc1VFA9j78zRMYE/yZ59m6ikpeGxmBdZ9
JSltWZtYemYzPTpZysocyRs5mD8CgYEA3tKviDreIR+TKT3FQoevyicXuwSn6ocH
2RTgJQF+Qyj+1ykQhwRQUD+axZGls5g2JgT+2gFIdUcAR9CN22rxLRbnIj645yGP
Ka4dVhNAZnz/olWgs4onoO0CnOGXAkVdyiBe9H/D1dkj5bqAfY1eov6khPMOyrDF
EXGi0e6uInbddI/sHUAAIIqJ4+knqwJIgxlzA9GFuzzt4oRLGMsoaClLYFCsrekJ
SF/w7DvhoDQo+JIrHuGX4hLgFLWOgp2WMWhbvgZ0P1PWcJukZ/jx7rJmkwKBgGa5
7x75NMtEiU3sInMnpw2ltDUOUnO3SRD1pNiqtZE05zg+wFXe0UAN8sa+/QutUtl4
WVH8mnqA5HOzVL3/HHyBdRbjbzpWmw+jc1VFA9j78zRMYE/yZ59m6ikpeGxmBdZ9
WB4dlVAsKZ7yMVRFG2dUNb7997TnLd9jXDcArSIS4q/uliXvvZFdc2TsQ/hSDolP
HzfNZ3XBo+EXeIFpmYW/rA13GQytLl5oDC28WaEhAoGBAL6acBqMflXUoWWVHZR7
0vNcJjtRTC13SGRoAKR/tT2kUqloz60bgWeVtggkFWTpPGgm6lmSuYvTnPeoHYDf
vLibVFGasTk8Y7Aji0V7rF4O
-----END PRIVATE KEY-----
$

Troubleshooting

Error: Unable to connect

You tried to access the IP address of the Incus server as (for example) https://192.168.1.10/ while you should have specified the port as well. The URL should look like https://192.168.1.10:8443/.

Error: Client sent an HTTP request to an HTTPS server

You tried to connect to the Incus server at an address (for example) http://192.168.1.10:8443/ but you omitted the s in https. Use https://192.168.1.10:8443/ instead.

Warning: Potential Security Risk Ahead

You are accessing the Incus server through the HTTPS address for the first time and the certificate has not been signed by a certification authority.

First attempt to access the Incus server over HTTPS with your browser.

Click on Advanced and select to Accept the risk and Continue. If you want to avoid this error message, you need to provide a server certificate that is accepted by your browser.

blog.simos.info/