Feed aggregator

6.16.12: stable

Linux Kernel - Yesterday, 12/10/2025 - 1:01pm
Version: 6.16.12 (EOL) (stable)
Released: 2025-10-12
Source: linux-6.16.12.tar.xz
PGP Signature: linux-6.16.12.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.16.12

6.12.52: longterm

Linux Kernel - Yesterday, 12/10/2025 - 12:59pm
Version: 6.12.52 (longterm)
Released: 2025-10-12
Source: linux-6.12.52.tar.xz
PGP Signature: linux-6.12.52.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.12.52

6.6.111: longterm

Linux Kernel - Yesterday, 12/10/2025 - 12:56pm
Version: 6.6.111 (longterm)
Released: 2025-10-12
Source: linux-6.6.111.tar.xz
PGP Signature: linux-6.6.111.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-6.6.111

In Copilot In Excel Demo, AI Told Teacher a 27% Exam Score Is of No Concern

Slashdot - Yesterday, 12/10/2025 - 5:37am
A demo of educational AI-powered tools by a Microsoft product manager (in March of 2024) showed "how AI has the possibility to transform various job sectors and the education system," according to one report. But that demo "includes a segment on Copilot in Excel that is likely to resonate with AI-wary software developers," writes long-time Slashdot reader theodp: The Copilot in Excel segment purports to show how even teachers who were too "afraid of" or "intimidated" to use Excel in the past can now just use natural language prompts to conduct Excel analysis. But Copilot advises the teacher there are no 'outliers' in the exam scores for their 17 students, whose test scores range from 27%-100%. (This is apparently due to Copilot's choice of an inappropriate outlier detection method for this size population and score range.) Fittingly, the student whose 27% score is confidently-but-incorrectly deemed to be of no concern by Copilot is named after Michael Scott, the largely incompetent and unprofessional boss of The Office. (Microsoft also named the other exam takers after characters from The Office.) The additional Copilot student score "analysis" touted by Microsoft in the demo is also less than impressive. It includes: 1. A vertical bar chart that fails to convey the test score distribution that a histogram would have (a rookie chart choice mistake), 2. A horizontal bar chart of student scores that only displays every other student's name and shows no score values (a rookie formatting error)... So, will teachers — like programmers — be spending a significant amount of time in the future reviewing, editing, and refining the outputs of their AI agent helpers? "Not only does it illustrate how the realities of AI assistants sometimes fall maddeningly short of the promises," argues the original submission. "The demo also shows how AI vendors and customers alike sometimes forget to review promotional AI content closely in all the AI excitement!"

New Large Coral Reef Discovered Off Naples Containing Rare Ancient Corals

Slashdot - Yesterday, 12/10/2025 - 4:37am
Off the southwest coast of Italy, a remotely operated submarine made "a significant and rare discovery," reports the Independent — a vast white coral reef that was 80 metres tall (262 feet) and 2 metres wide (6.56 feet) "containing important species and fossil traces." Often dubbed the "rainforests of the sea", coral reefs are of immense scientific interest due to their status as some of the planet's richest marine ecosystems, harbouring millions of species. They play a crucial role in sustaining marine life but are currently under considerable threat... These impressive formations are composed of deep-water hard corals, commonly referred to as "white corals" because of their lack of colour, specifically identified as Lophelia pertusa and Madrepora oculata species. The reef also contains black corals, solitary corals, sponges, and other ecologically important species, as well as fossil traces of oysters and ancient corals, the Italian Research Council said. It called them "true geological testimonies of a distant past." Mission leader Giorgio Castellan said the finding was "exceptional for Italian seas: bioconstructions of this kind, and of such magnitude, had never been observed in the Dohrn Canyon, and are rarely seen elsewhere in our Mediterranean". The discovery will help scientists understand the ecological role of deep coral habitats and their distribution, especially in the context of conservation and restoration efforts, he added. The undersea research was funded by the EU. Thanks to davidone (Slashdot reader #12,252) for sharing the article.

'Tron: Ares' Mode Turns Teslas Into Glowing Light Cycles — Despite Bad Box Office

Slashdot - Yesterday, 12/10/2025 - 2:51am
An anonymous reader shared this report from The Wrap: Tesla this weekend introduced a new "Tron: Ares" mode, giving drivers an opportunity to turn their on-screen vehicles into the glowing Light Cycles that have been a big part of the Disney franchise since 1982. The optional update started rolling out on Friday, as Tron: Ares debuted in theaters. Tesla announced the update on X: "The grid has expanded to your Tesla — Tron: Ares update rolling out now." The feature is activated in Tesla's Toybox "infotainment" system, and turns the driver's vehicle avatar into a red Light Cycle. For drivers who have the "ambient lighting" feature, the mode will also expand the theme throughout the cabin. There was also a sleek black Tesla Optimus robot at the premiere of Tron: Ares. Ironically, the Hollywood Reporter writes that by box office figures, "Tron is in big trouble," selling fewer tickets than expected (despite the movie's $180 million pre-marketing budget). While Tron's audience reviews gave it an 86% score on Rotten Tomatoes, its score with critics is just 57%. The Los Angeles Times says the movie "has glowing style, but its storytelling doesn't compute." (Or, as the New York Times puts it, "Who needs logic when you have neon?")

German State of Schleswig-Holstein Migrates To FOSS Groupware. Next Up: Linux OS

Slashdot - Yesterday, 12/10/2025 - 12:59am
Long-time Slashdot reader Qbertino writes: German IT news outlet Heise reports [German-language article] that the northernmost state, Schleswig-Holstein, has, after half a year of frantic data-migration work, successfully migrated its MS Outlook mail and groupware setups to a FOSS solution using Open-Xchange and Thunderbird. Stakeholders consider the move a major success and a milestone toward digital sovereignty and cost savings. It makes the state a pioneer in Germany. As a next major step, Schleswig-Holstein plans to migrate its authorities' and administrations' desktop PCs to Linux.

New California Privacy Law Will Require Chrome/Edge/Safari to Offer Easy Opt-Outs for Data Sharing

Slashdot - Sat, 11/10/2025 - 10:38pm
"California Governor Gavin Newsom signed the 'California Opt Me Out Act', which will require web browsers to include an easy, universal way for users to opt out of data collection and sales," reports the blog 9to5Mac: [The law] requires browsers to provide a clear, one-click mechanism for Californians to opt out of data sharing across websites. The bill reads: "A business shall not develop or maintain a browser that does not include functionality configurable by a consumer that enables the browser to send an opt-out preference signal to businesses with which the consumer interacts through the browser...." Californians will need patience, though, as the law doesn't take effect until January 1, 2027. Americans in some states — including California, Texas, Colorado, New Jersey and Maryland — "have the option to make those opt-out demands automatic whenever they surf the web," reports the Washington Post. "But they can only do so if they use small browsers that voluntarily offer that option, such as DuckDuckGo, Firefox and Brave. What's new in California's law is that all browsers must give people the same option." That means soon in California, just using Google's Chrome, Apple's Safari and Microsoft's Edge can command companies not to sell your data or pass it along for ad targeting... It's an imperfect but potent and simple way to flex privacy rights — and becomes even more powerful with another simple privacy measure in California. Starting on January 1, California residents can fill out an online form once to completely and repeatedly wipe their data from hundreds of data brokers that package your personal information for sale. But their article also suggests other ways readers can "try a one-click privacy option now." "[S]ome national companies respect one-click privacy opt-out requests from everyone... This happens automatically if you use DuckDuckGo and Brave. You need to change a setting with Firefox." "Download Privacy Badger: The software from the Electronic Frontier Foundation, a consumer privacy advocacy group, works in the background to order websites not to sell information they're collecting about you." "Use Permission Slip from Consumer Reports. Give the app basic information, and it will help you do much of the legwork to tell companies not to sell your information or to delete it, if you have the right to do so."

Bitcoin and Other Cryptocurrencies Had Double-Digit Drops Friday, Largest Liquidation Event Ever

Slashdot - Sat, 11/10/2025 - 9:38pm
An anonymous reader shared this report from the Independent: Bitcoin and Ethereum both saw record liquidations as investors reacted to fears over a trade war, which saw many crypto investors move their money to stablecoins or safer assets... Bitcoin fell by more than 10 per cent to below $110,000, before recovering to $113,096 on Saturday morning. The value of Ethereum slumped by 11.2 per cent to $3,878. Other cryptocurrencies, including XRP, Doge and Ada, fell around 19 per cent, 27 per cent, and 25 per cent in the last 24 hours, respectively. LiveMint shares some statistics from Bloomberg: Citing 24-hour data from Coinglass, the report noted that more than $19 billion has been wiped out in the "largest liquidation event in crypto history", which impacted more than 1.6 million traders. It added that more than $7 billion of those positions were sold in less than one hour of trading on October 10. According to data on CoinMarketCap, the cryptocurrency market cap has dived to $3.74 trillion from the previous day's record-high $4.30 trillion level. Trading volumes as of the market close were recorded at $490.23 billion. Bitcoin retreated on Friday, as US-China trade tensions reignited, after racing to record highs earlier in the week as persistent rate-cut bets and signs of some cooling in geopolitical tensions helped boost risk. Bitcoin was trading at $105,505.4 on Friday, down 13.15% on the day.

'Circular' AI Mega-Deals by AI and Hardware Giants are Raising Eyebrows

Slashdot - Sat, 11/10/2025 - 8:34pm
"Nvidia is investing billions in and selling chips to OpenAI, which is also buying chips from and earning stock in AMD," writes SFGate. "AMD sells processors to Oracle, which is building data centers with OpenAI — which also gets data center work from CoreWeave. And that company is partially owned by, yes, Nvidia. "Taken together, it's a doozy." There are other collaborations and rivalries and many other factors at play, but OpenAI is the many-tentacled octopus in the middle, spinning its achievement of ChatGPT into a blitz of speculative investments. "We are in a phase of the build-out where the entire industry's got to come together and everybody's going to do super well," OpenAI CEO Sam Altman told the Wall Street Journal on Monday. "You'll see this on chips. You'll see this on data centers. You'll see this lower down the supply chain...." Some worry that the more closely companies intertwine, the more susceptible they are to creating a bubble, or a market not actually supported by real consumer demand. "You don't have to be a skeptic about AI technology's promise in general to see this announcement as a troubling signal about how self-referential the entire space has become," Bespoke Investment Group wrote in a note to clients, per CNBC. "If NVDA has to provide the capital that becomes its revenues in order to maintain growth, the whole ecosystem may be unsustainable..." Also, even with Nvidia's investment, AMD's shares and OpenAI's repeated fundraises, the ChatGPT-maker doesn't have the cash to meet all of these vast commitments. And if OpenAI's soaring projections about demand for AI computing don't bear out, there will be a lot of committed money — and a large share of the stock market — that would see its foundations topple. Thanks to long-time Slashdot reader mspohr for sharing the news.

Peter Hutterer: Why is my device a touchpad and a mouse and a keyboard?

Planet GNOME - Wed, 20/08/2025 - 1:12pm

If you have spent any time around HID devices under Linux (for example if you are an avid mouse, touchpad or keyboard user) then you may have noticed that your single physical device actually shows up as multiple device nodes (for free! and nothing happens for free these days!). If you haven't noticed this, run libinput record and you may be part of the lucky roughly 50% who get free extra event nodes.

The pattern is always the same. Assuming you have a device named FooBar ExceptionalDog 2000 AI[1], what you will see are multiple devices:

/dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
/dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
/dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control

The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's HID implementation, which splits out a device based on the Application Collection. [2]

A HID report descriptor may use collections to group things together. A "Physical Collection" indicates "these things are (on) the same physical thingy". A "Logical Collection" indicates "these things belong together". And you can of course nest these things near-indefinitely so e.g. a logical collection inside a physical collection is a common thing.

An "Application Collection" is a high-level abstractions to group something together so it can be detected by software. The "something" is defined by the HID usage for this collection. For example, you'll never guess what this device might be based on the hid-recorder output: # 0x05, 0x01, // Usage Page (Generic Desktop) 0 # 0x09, 0x06, // Usage (Keyboard) 2 # 0xa1, 0x01, // Collection (Application) 4 ... # 0xc0, // End Collection 74 Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.

The kernel, ever eager to help, takes top-level application collections (i.e. those not inside another collection) and applies a usage-specific suffix to the device. For the above Generic Desktop/Keyboard usage you get "Keyboard"; the other ones currently supported are "Keypad" and "Mouse", as well as the slightly more niche "System Control", "Consumer Control", "Wireless Radio Control" and "System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen", "Touchscreen" and "Touchpad". Any other Application Collection is currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses "Touch Strip" and other suffixes).

This suffix is necessary because the kernel also splits out the data sent within each collection as a separate evdev event node. Since HID is (mostly) hidden from userspace, this makes it much easier for userspace to identify different devices, because you can look at an event node and say "well, it has buttons and x/y, so it must be a mouse" (this is exactly what udev does when applying the various ID_INPUT properties, with varying levels of success).

The side effect of this however is that your device may show up as multiple devices, and most of those extra devices will never send events. Sometimes that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate a mouse for backwards compatibility, but once the kernel toggles it to touchpad mode the mouse node goes mute). Sometimes it's just laziness: vendors re-use the same firmware and leave unused bits in place.

It's largely a cosmetic problem: libinput treats every event node as an individual device, and a device that never sends events won't affect the other event nodes. It can cause user confusion though ("why does my laptop say there's a mouse?") and in some cases it can cause functional degradation. The two I can immediately recall are udev detecting the mouse node of a touchpad as a pointing stick (because i2c mice aren't a thing), so pointing stick configuration may show up in unexpected places; and fake mouse devices preventing features like "disable touchpad if a mouse is plugged in" from working correctly. At the moment we don't have a good solution for detecting these fake devices - short of shipping giant databases with product-specific entries we cannot easily detect which device is fake. After all, a Keyboard node on a gaming mouse may only send events if the user configured the firmware to send keyboard events, and the same is true for a Mouse node on a gaming keyboard.

So for now, the only solution to those is a per-user udev rule to ignore a device. If we ever figure out a better fix, expect to find a gloating blog post in this very space.
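For illustration, libinput honours the LIBINPUT_IGNORE_DEVICE udev property, so a rule along these lines (using the hypothetical device name from above) hides a fake node from libinput:

# /etc/udev/rules.d/99-ignore-fake-mouse.rules — a sketch; the device name is made up
ACTION=="add|change", KERNEL=="event*", ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", ENV{LIBINPUT_IGNORE_DEVICE}="1"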

[1] input device naming is typically bonkers, so I'm just sticking with precedent here
[2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
[3] or sparkling wine, let's not be regionist here

Thibault Martin: Cloud tech makes sense on-prem too

Planet GNOME - Wed, 20/08/2025 - 12:00pm

In the previous post, we talked about the importance of having a flexible homelab with Proxmox, and set it up. Long story short, I only have a single physical server but I like to experiment with new setups regularly. Proxmox is a baremetal hypervisor: a piece of software that lets me spin up Virtual Machines on top of my server, to act as mini servers.

Thanks to this set-up I can have a long-lived VM for my (single node) production k3s cluster, and I can spin up disposable VMs to experiment with, without impacting my production.

But it's more complex to install Proxmox, spin up a VM, and install k3s on it, as compared to just installing Debian and k3s on my baremetal server. We have already automated the Proxmox install process. Let's now automate the VM provisioning and deploy k3s on it, to make it simple and easy to re-provision a fully functional Virtual Machine on top of Proxmox!

In this post we will configure opentofu so it can ask Proxmox to spin up a new VM, use cloud-init to do the basic pre-configuration of the VM, and use ansible to deploy k3s on it.

Provisioning and pre-configuring a VM

OpenTofu is software I execute on my laptop. It reads files describing what I want to provision, and performs the actual provisioning. I can use it to say "I want a VM with 4 vCPUs and 8GB of RAM on this Proxmox cluster," or "I want to add this A record to my DNS managed by Cloudflare." I need to write this down in .tf files, and invoke the tofu CLI to read those files and apply the changes.

Opentofu is quite flexible. It can connect to many different providers (e.g. Proxmox, AWS, Scaleway, Hetzner...) to spin up a variety of resources (e.g. a VM on Proxmox, an EC2 or EKS instance on AWS, etc). Proxmox, Amazon and other providers publish Provider plugins for opentofu, available in the OpenTofu Registry (and in the Terraform Registry, since opentofu is backward compatible for now).

Configuring Opentofu for Proxmox

To use Opentofu with Proxmox, you need to pick and configure an Opentofu Provider for Proxmox. There seem to be two active implementations. The first, bpg/proxmox, seems to have better test coverage, and friends have used it for months without a problem. I am taking a leap of faith and picking it.

The plugin needs to be configured so opentofu on my laptop can talk to Proxmox and spin up new VMs. To do so, I need to create a Proxmox service account that opentofu will use, so opentofu has sufficient privileges to create the VMs I ask it to create.

I will rely on the pveum (Proxmox Virtual Environment User Management) utility to create a role with the right privileges, create a new user/service account, and assign the role to the service account.

Once ssh'd into the Proxmox host, I can create the terraform user that opentofu will use

# pveum user add terraform@pve

[!info] I don't have to add a password

I will issue an API Key for opentofu to authenticate as this user. Not having a password reduces the attack surface by ensuring nobody can use this service account to log into the web UI.

Then let's create the role

# pveum role add Terraform -privs "Datastore.Allocate \
    Datastore.AllocateSpace \
    Datastore.AllocateTemplate \
    Datastore.Audit \
    Pool.Allocate \
    Sys.Audit \
    Sys.Console \
    Sys.Modify \
    VM.Allocate \
    VM.Audit \
    VM.Clone \
    VM.Config.CDROM \
    VM.Config.Cloudinit \
    VM.Config.CPU \
    VM.Config.Disk \
    VM.Config.HWType \
    VM.Config.Memory \
    VM.Config.Network \
    VM.Config.Options \
    VM.Console \
    VM.Migrate \
    VM.PowerMgmt \
    SDN.Use"

And now let's assign the role to the user

# pveum aclmod / -user terraform@pve -role Terraform
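To double-check that the ACL took effect, pveum can print the effective permissions of the service account (recent Proxmox VE versions ship this subcommand; if yours doesn't, the web UI's Permissions panel shows the same information):

# pveum user permissions terraform@pve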

Finally I can create an API token

# pveum user token add terraform@pve provider --privsep=0
┌──────────────┬──────────────────────────┐
│ key          │ value                    │
╞══════════════╪══════════════════════════╡
│ full-tokenid │ terraform@pve!provider   │
├──────────────┼──────────────────────────┤
│ info         │ {"privsep":"0"}          │
├──────────────┼──────────────────────────┤
│ value        │ REDACTED                 │
└──────────────┴──────────────────────────┘

I now have a service account up and ready. Let's create a ~/Projects/infra/tofu folder that will contain my whole infrastructure's opentofu files. In that folder, I will create a providers.tf file to declare and configure the various providers I need. For now, this will only be Proxmox. I can configure my Proxmox provider so it knows where the API endpoint is, and what API key to use.

terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.80.0"
    }
  }
}

provider "proxmox" {
  endpoint  = "https://192.168.1.220:8006/"
  api_token = "terraform@pve!provider=REDACTED"
}

For some operations, including VM provisioning, the API is not enough and the Proxmox provider needs to ssh into the Proxmox host to issue commands. I can configure the Proxmox provider to use my ssh agent.

This way, when I call the tofu command on my laptop to provision VMs on the Proxmox host, the provider will use the ssh-agent of my laptop to authenticate against the Proxmox host. This will make opentofu use my ssh keypair to authenticate. Since my ssh key is already trusted by the Proxmox host, opentofu will be able to log in seamlessly.

provider "proxmox" { endpoint = "https://192.168.1.220:8006/" api_token = "terraform@pve!provider=REDACTED" insecure = true ssh { agent = true username = "root" } }

[!warning] Insecure but still somewhat secure

We add an insecure line to our configuration. It instructs opentofu to skip the TLS verification of the certificate presented by the Proxmox host. We do this because Proxmox generates a self-signed certificate our computer doesn't trust. We will understand what this means and fix that in a further blog post.

The main risk we're facing by doing so is letting another machine impersonate our Proxmox host. Since we're working on a homelab, in a home network, the chances of that happening are extraordinarily low, so it can be considered temporarily acceptable.
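In the meantime, if you want a little extra assurance, you can record the self-signed certificate's SHA-256 fingerprint once and compare it with what the Proxmox UI reports. This is a manual, best-effort check, not a replacement for proper verification:

$ openssl s_client -connect 192.168.1.220:8006 </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256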

After moving to the tofu directory, running tofu init will install the provider

$ tofu init
Initializing the backend...
Initializing provider plugins...
- Finding bpg/proxmox versions matching "0.80.0"...
- Installing bpg/proxmox v0.80.0...
- Installed bpg/proxmox v0.80.0 (signed, key ID F0582AD6AE97C188)

And a tofu plan shouldn't return an error

$ tofu plan
No changes. Your infrastructure matches the configuration.

OpenTofu has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Removing sensitive information

If you made it this far, you probably think I am completely reckless for storing credentials in plain text files, and you would be correct to think so. Credentials should never be stored in plain text. Fortunately opentofu can grab sensitive credentials from environment variables.

I use Bitwarden to store my production credentials and pass them to opentofu when I step into my work directory. You can find all the details on how to do it on this previous blog post. Bear in mind that this works well for a homelab but I wouldn't recommend it for a production setup.

We need to create a new credential in the Infra folder of our vault, and call it PROXMOX_VE_API_TOKEN. Its content is the following

terraform@pve!provider=yourApiKeyGoesHere

Then we need to sync the vault managed by the bitwarden CLI, to ensure it has the credential we just added.

$ bw sync
Syncing complete.

Let's update our ~/Projects/infra/.direnv to make it retrieve the PROXMOX_VE_API_TOKEN environment variable when we step into our work directory.

bitwarden_password_to_env Infra PROXMOX_VE_API_TOKEN

And let's make direnv allow it

$ direnv allow ~/Projects/infra/

We can now remove the credential from tofu/providers.tf

provider "proxmox" { endpoint = "https://192.168.1.220:8006/" api_token = "terraform@pve!provider=REDACTED" insecure = true ssh { agent = true username = "root" } } Spinning up a new VM

Now that I have a working proxmox provider for opentofu, it's time to spin up a first VM! I already use Debian for my Proxmox host, I'm familiar with Debian, it's very stable, and it has a responsive security team. I want to keep track of as few operating systems (OS) as possible, so whenever possible I will use it as the base OS for my VMs.

When I spin up a new VM, I can also pre-configure a few settings with cloud-init. Cloud-init defines standard files that my VM will read on first boot. Those files contain various instructions: I can use them to give a static IP to my VM, create a user, add it to the sudoers without a password, and add a ssh key to let me perform key-based authentication with ssh.

I need to use a "cloud image" of Debian for it to support cloud-init file. I can grab the link on Debian's official Download page. I could upload it manually to Proxmox, but we're here to make things tidy and reproducible! So let's create a tofu/cloud-images.tf file where we will tell opentofu to ask the Proxmox node to download the file.

resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" { content_type = "iso" datastore_id = "local" node_name = "proximighty" url = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2" file_name = "debian-13-generic-amd64.img" }

[!info] No include needed!

Opentofu merges all the files in the root of a directory into a single file before processing it. There is no need to include/import our tofu/providers.tf file into tofu/cloud-images.tf!

Let's run a tofu plan to see what opentofu would do.

$ tofu plan

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_download_file.debian_13_cloud_image will be created
  + resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" {
      + content_type        = "iso"
      + datastore_id        = "local"
      + file_name           = "debian-13-generic-amd64.img"
      + id                  = (known after apply)
      + node_name           = "proximighty"
      + overwrite           = true
      + overwrite_unmanaged = false
      + size                = (known after apply)
      + upload_timeout      = 600
      + url                 = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
      + verify              = true
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Everything looks alright, let's apply it to actually make the Proxmox host download the image!

$ tofu apply

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_download_file.debian_13_cloud_image will be created
  + resource "proxmox_virtual_environment_download_file" "debian_13_cloud_image" {
      + content_type        = "iso"
      + datastore_id        = "local"
      + file_name           = "debian-13-generic-amd64.img"
      + id                  = (known after apply)
      + node_name           = "proximighty"
      + overwrite           = true
      + overwrite_unmanaged = false
      + size                = (known after apply)
      + upload_timeout      = 600
      + url                 = "https://cloud.debian.org/images/cloud/trixie/latest/debian-13-generic-amd64.qcow2"
      + verify              = true
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_virtual_environment_download_file.debian_13_cloud_image: Creating...
proxmox_virtual_environment_download_file.debian_13_cloud_image: Creation complete after 8s [id=local:iso/debian-13-generic-amd64.img]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Looking at the Proxmox UI I can see that the image has indeed been downloaded

Excellent! Now we can describe the parameters of the virtual machine we want to create, by creating a tofu/k3s-main.tf file that contains a virtual_environment_vm resource like so

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" description = "Production k3s' main VM" tags = ["production", "k3s", "debian"] node_name = "proximighty" }

This is the meta-data of our VM, giving it a name and a Proxmox node to run on. But we need to be more specific. Let's give it 4 CPUs, and 16 GB of RAM.

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" description = "Production k3s' main VM" tags = ["production", "k3s", "debian"] node_name = "proximighty" cpu { cores = 4 type = "x86-64-v4" } memory { dedicated = 16384 floating = 16384 # set equal to dedicated to enable ballooning } }

To figure out the type of cpu to use for your VM, issue the following command on the Proxmox host

$ /lib64/ld-linux-x86-64.so.2 --help
[...]
Subdirectories of glibc-hwcaps directories, in priority order:
  x86-64-v4 (supported, searched)
  x86-64-v3 (supported, searched)
  x86-64-v2 (supported, searched)
[...]

We can then give it a 50 GB disk with the disk block.

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" [...] disk { datastore_id = "local" interface = "virtio0" iothread = true size = 50 file_id = proxmox_virtual_environment_download_file.debian_13_cloud_image.id } }

I use the local datastore and not lvm-thin despite running QEMU because I don't want to allow over-provisioning. lvm-thin would allow me to allocate a disk of 500 GB to my VM, even if I only have 100 GB available, because the VM will only fill the Proxmox drive with the actual content it uses. You can read more about storage on Proxmox's wiki.

I use a virtio device, since a colleague told me "virtio uses a special communication channel that requires guest drivers, that are well supported out of the box on Linux. You can be way faster when the guest knows it's a VM and don't have to emulate something that was intended for actual real hardware. It's the same for your network interface and a bunch of other things. Usually if there is a virtio option you want to use that"

I set the file_id to the Debian cloud image we downloaded earlier.

I can then add a network interface that will use the vmbr0 bridge I created when setting up my Proxmox host. I also need an empty serial_device, or Debian crashes.

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" [...] network_device { bridge = "vmbr0" } serial_device {} }

Now, instead of spinning up an un-configured VM and manually retrieving the parameters set during boot, we will use cloud-init to pre-configure it. We will do the following:

  • Configure the network to get a static IP
  • Configure the hostname
  • Configure the timezone to UTC
  • Add a user thib that doesn't have a password
  • Add thib to sudoers, without a password
  • Add the public ssh key from my laptop to the trusted keys of thib on the VM, so I can log in with a ssh key and never have to use a password.

The documentation of the Proxmox provider teaches us that Proxmox has native support for cloud-init. This cloud-init configuration is done in the initialization block of the virtual_environment_vm resource.

We will first give it an IP

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" [...] initialization { datastore_id = "local" ip_config { ipv4 { address = "192.168.1.221/24" gateway = "192.168.1.254" } } } }

I'm only specifying the datastore_id because by default it uses local-lvm, which I have not configured on my Proxmox host.

To create a user and give it ssh keys I could use the user_account block inside initialization. Unfortunately it doesn't support adding the user to sudoers, nor installing extra packages. To circumvent that limitation I will have to create a user config data file and pass it to cloud-init.

Let's start by creating the user config data file resource within tofu/k3s-main.tf

resource "proxmox_virtual_environment_file" "user_data_cloud_config" { content_type = "snippets" datastore_id = "local" node_name = "proximighty" source_raw { data = <<-EOF #cloud-config hostname: mightykube timezone: UTC users: - default - name: thib lock_passwd: true groups: - sudo shell: /bin/bash ssh_authorized_keys: - ${trimspace("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org")} sudo: ALL=(ALL) NOPASSWD:ALL - name: root lock_passwd: true package_update: true package_upgrade: true packages: - htop - qemu-guest-agent - vim runcmd: - systemctl enable qemu-guest-agent - systemctl start qemu-guest-agent - echo "done" > /tmp/cloud-config.done EOF file_name = "user-data-cloud-config.yaml" } }

It's a bit inelegant to keep a copy of my ssh key inside this file. Let's ask opentofu to read it from the actual file on my laptop instead, by creating a local_file data source for it in tofu/k3s-main.tf

data "local_file" "ssh_public_key" { filename = "/Users/thibaultmartin/.ssh/id_ed25519.pub" }

If I try to plan the change, I get the following error

$ tofu plan
╷
│ Error: Inconsistent dependency lock file
│
│ The following dependency selections recorded in the lock file are inconsistent with the
│ current configuration:
│   - provider registry.opentofu.org/hashicorp/local: required by this configuration but no version is selected
│
│ To update the locked dependency selections to match a changed configuration, run:
│   tofu init -upgrade
╵

Like the error message says, I can fix it with tofu init -upgrade

$ tofu init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding opentofu/cloudflare versions matching "5.7.1"...
- Finding bpg/proxmox versions matching "0.80.0"...
- Finding latest version of hashicorp/local...
- Installing hashicorp/local v2.5.3...
- Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
- Using previously-installed opentofu/cloudflare v5.7.1
- Using previously-installed bpg/proxmox v0.80.0

Providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://opentofu.org/docs/cli/plugins/signing/

OpenTofu has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

OpenTofu has been successfully initialized!
[...]

I can now change the user_data_cloud_config resource to reference ssh_public_key

resource "proxmox_virtual_environment_file" "user_data_cloud_config" { [...] ssh_authorized_keys: - ${trimspace("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org")} - ${trimspace(data.local_file.ssh_public_key.content)} [...] }

Now let's update the initialization block of k3s-main to use that cloud-init file

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" [...] initialization { datastore_id = "local" ip_config { ipv4 { address = "192.168.1.221/24" gateway = "192.168.1.254" } } user_data_file_id = proxmox_virtual_environment_file.user_data_cloud_config.id } }

We can check with tofu plan that everything is alright, and then actually apply the plan with tofu apply

$ tofu apply
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_vm.k3s-main will be created
  + resource "proxmox_virtual_environment_vm" "k3s-main" {
      [...]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_virtual_environment_vm.k3s-main: Creating...
proxmox_virtual_environment_vm.k3s-main: Creation complete after 5s [id=100]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Looking at Proxmox's console, I can see that the VM was created, that it is healthy, and even that it has the mightykube hostname I configured for it.

Now I can try to ssh into the newly created VM from my laptop

$ ssh thib@192.168.1.221
The authenticity of host '192.168.1.221 (192.168.1.221)' can't be established.
ED25519 key fingerprint is SHA256:39Qocnshj+JMyt4ABpD9ZIjDpOHhXqdet94QeSh+uDo.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.221' (ED25519) to the list of known hosts.
Linux mightykube 6.12.41+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.41-1 (2025-08-12) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
thib@mightykube:~$

Let's check that I can perform actions as root without being prompted for a password

thib@mightykube:~$ sudo apt update
Get:1 file:/etc/apt/mirrors/debian.list Mirrorlist [30 B]
Get:2 file:/etc/apt/mirrors/debian-security.list Mirrorlist [39 B]
Hit:3 https://deb.debian.org/debian trixie InRelease
Hit:4 https://deb.debian.org/debian trixie-updates InRelease
Hit:5 https://deb.debian.org/debian trixie-backports InRelease
Hit:6 https://deb.debian.org/debian-security trixie-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.

Brilliant! Just like that, I have a VM on Proxmox, with a static IP address, a well-known user, ssh key authentication and no password to manage at all!
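Before moving on, we can also verify from inside the VM that cloud-init actually completed and that the qemu-guest-agent we asked for in runcmd is running (cloud-init ships a status subcommand for exactly this):

thib@mightykube:~$ cloud-init status
status: done
thib@mightykube:~$ systemctl is-active qemu-guest-agent
active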

[!warning] No password means no password!

When creating the VM, cloud-init creates a user but doesn't give it a password. It means we can only rely on SSH to control the VM. If we lose our SSH key or mess up the sshd config and can't ssh into the VM, we're (kind of) locked out!

We have access to the VM console via Proxmox, but without a password we can't log into it. It is possible to rescue it by booting a live system and chrooting into our actual system, but it can be tedious. We'll cover that in a future blog post.

We still have credentials in our .tf files so we can't commit them yet. We will extract the credentials a bit later, but first let's refactor our files for clarity.

A single place to allocate IPs

Since all my VMs will get a static IP, I want to make sure I keep a tidy list of all the IPs already used. This will help avoid IP clashes. Let's create a new ips.tf file to keep track of everything

locals {
  reserved_ips = {
    proxmox_host = "192.168.1.220/24"
    k3s_main     = "192.168.1.221/24"
    gateway      = "192.168.1.254"
  }
}

When spinning up the VM for the main k3s node, I will be able to refer to the local.reserved_ips.k3s_main local variable. So let's update the tofu/k3s-main.tf file accordingly!

resource "proxmox_virtual_environment_vm" "k3s-main" { name = "k3s-main" description = "Production k3s' main VM" tags = ["production", "k3s", "debian"] node_name = "proximighty" [...] initialization { datastore_id = "local" ip_config { ipv4 { address = "192.168.1.221/24" gateway = "192.168.1.254" address = local.reserved_ips.k3s_main gateway = local.reserved_ips.gateway } } user_data_file_id = proxmox_virtual_environment_file.user_data_cloud_config.id } }

We now have a single file to allocate IPs to virtual machines. We can see at a glance whether an IP is already used or not. That should save us some trouble! Let's now have a look at the precautions we need to take to save our files with git.

Keeping a safe copy of our state

What is tofu state?

We used opentofu to describe what resources we wanted to create. Let's remove the resource "proxmox_virtual_environment_vm" "k3s-main" we have created, and run tofu plan to see how opentofu would react to that.

$ tofu plan
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_vm.k3s-main: Refreshing state... [id=100]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

OpenTofu will perform the following actions:

  # proxmox_virtual_environment_vm.k3s-main will be destroyed
  # (because proxmox_virtual_environment_vm.k3s-main is not in configuration)
  - resource "proxmox_virtual_environment_vm" "k3s-main" {
      [...]
    }

Plan: 0 to add, 0 to change, 1 to destroy.

If I remove a resource block, opentofu will try to delete the corresponding resource. But hang on, that might be dangerous! Opentofu might not be aware of other VMs I have deployed. If I already had 3 VMs running on Proxmox and started using opentofu after that, would it destroy them all, since I didn't describe them in my files?!

Fortunately for us, no. Opentofu needs to know what it is in charge of, and leave the rest alone. When I provision something via opentofu, it adds it to a local inventory of all the things it manages. That inventory is called a state file and looks like the following (prettified via jq)

{ "version": 4, "terraform_version": "1.10.5", "serial": 13, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "outputs": {}, "resources": [ { "mode": "data", "type": "local_file", "name": "ssh_public_key", "provider": "provider[\"registry.opentofu.org/hashicorp/local\"]", "instances": [ { "schema_version": 0, "attributes": { "content": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org\n", "content_base64": "c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUlHQysrdmJNVHJTYlFGS0ZndGhqOW9MYVcxejVmQ2tRdGxQQ25HNmVPYkIgdGhpYkBlcmdhc3Rlci5vcmcK", "content_base64sha256": "YjQvgHA99AXWCaKLep6phGgdlmkZHvXU3OOhRSsQvms=", "content_base64sha512": "tRp4/iG90wX0R1SghdvXwND8Hg6ADNuMMdPXANUYDa2uIjkRkLRgK5YPK6ACz5cbW+SbqvGPGzYpWNNFLGIFpQ==", "content_md5": "ed5ee6428ea7c048fe8019bb1a2206b3", "content_sha1": "930cea05ae5e662573618e0d9f3e03920196cc5f", "content_sha256": "62342f80703df405d609a28b7a9ea984681d9669191ef5d4dce3a1452b10be6b", "content_sha512": "b51a78fe21bdd305f44754a085dbd7c0d0fc1e0e800cdb8c31d3d700d5180dadae22391190b4602b960f2ba002cf971b5be49baaf18f1b362958d3452c6205a5", "filename": "/Users/thibaultmartin/.ssh/id_ed25519.pub", "id": "930cea05ae5e662573618e0d9f3e03920196cc5f" }, "sensitive_attributes": [] } ] }, [...] ], "check_results": null }

The tofu state is a local representation of what opentofu manages. It's absolutely mandatory for opentofu to work: this is how opentofu knows if it needs to deploy, update, or tear down resources. So we need to keep it in a safe place, and my laptop is not a safe place at all. It can fail or get stolen. Since the state is a text-based file, I could use git to keep remote copies of it.

But as you can see, the tofu state file contains my public key, which it read from a local file. The state file contains a structured view of what is in the .tf files it manages. So far we have not added any sensitive credentials, but we might do so later and not realize they will end up in the state, and thus on a git repo.

Fortunately, opentofu comes with tools that let us encrypt the state, so we can commit it to a remote git repository with more peace of mind.

Encrypting the tofu state

Before encrypting our state, it's worth reading an important section of the opentofu documentation, so you understand what encryption entails.

We need to migrate our unencrypted state to an encrypted one. Let's bear in mind that there's no way back if we screw up, so let's make a backup first (and delete it when we're done). Note that a properly encrypted state can be migrated back to a decrypted one, but a botched encrypted state will likely be irrecoverable. Let's just copy it to a different directory

$ cd ~/Projects/infra/tofu
$ mkdir ~/tfbackups
$ cp terraform.tfstate{,.backup} ~/tfbackups/

To encrypt our state, we need to choose an encryption method: as a single admin homelabber I'm going for the simpler and sturdier method. I don't want to depend on extra infrastructure for secrets management, so I'm using PBKDF2, which roughly means "generating an encryption key from a long passphrase."
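The scheme is only as strong as the passphrase, so it's worth generating a long random one. One possible way to do it (any password manager's generator works just as well):

$ openssl rand -base64 32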

With that in mind, let's follow the documentation to migrate a pre-existing project. Let's open our providers.tf file and add an encryption block within the terraform one.

terraform {
  encryption {
    method "unencrypted" "migrate" {}

    key_provider "pbkdf2" "password_key" {
      passphrase = "REDACTED"
    }

    method "aes_gcm" "password_based" {
      keys = key_provider.pbkdf2.password_key
    }

    state {
      method = method.aes_gcm.password_based
      fallback {
        method = method.unencrypted.migrate
      }
    }
  }

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.80.0"
    }
  }
}

provider "proxmox" {
  endpoint = "https://192.168.1.220:8006/"
  ssh {
    agent    = true
    username = "root"
  }
}

This block instructs terraform to encrypt the state with a key generated from our password. It also tells it that a pre-existing unencrypted state may exist, and that it's okay to read and encrypt it.

Note that I've used the encryption passphrase directly in that block. We will move it to a safer place later, but for now let's keep things simple.

Let's now apply this plan to see if our state gets encrypted correctly, but make sure you do have a cleartext backup first.

$ cd ~/Projects/infra/tofu
$ tofu apply

After the apply, we can have a look at the terraform.tfstate file to check that it has indeed been encrypted.

{ "serial": 13, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "meta": { "key_provider.pbkdf2.password_key": "eyJzYWx0[...]" }, "encrypted_data": "ONXZsJhz[...]", "encryption_version": "v0" }

I know that opentofu people probably know what they're doing, but I don't like that password_key field. It starts with eyJ, so that must be a base64 encoded json object. Let's decode that

$ echo "eyJzYWx0[...]" | base64 -d {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}

All good, it's just the salt, iterations, hash function and key length parameters. Those are pretty much public, we can commit the file to our repo! But... what about the terraform.tfstate.backup file? Let's examine this one

{ "version": 4, "terraform_version": "1.10.5", "serial": 12, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "outputs": {}, "resources": [ { "mode": "data", "type": "local_file", "name": "ssh_public_key", "provider": "provider[\"registry.opentofu.org/hashicorp/local\"]", "instances": [ { "schema_version": 0, "attributes": { "content": "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIGC++vbMTrSbQFKFgthj9oLaW1z5fCkQtlPCnG6eObB thib@ergaster.org\n", [...] } ] }, [...] ], "check_results": null }

Oh dear! That one is not encrypted! I didn't find any use for it, since terraform can't do "rollbacks", and I couldn't find docs for it. I deleted the file, and I could still perform tofu apply without a problem. The next iterations should be encrypted, but I will add it to my .gitignore just in case!
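For reference, a minimal sketch of what that .gitignore could contain (file names follow opentofu's defaults; adjust to your layout):

# tofu/.gitignore — keep the plaintext backup and the provider cache out of git
terraform.tfstate.backup
.terraform/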

We're not quite ready to commit our files though. We still have a secret in plain text: the encryption passphrase. Let's extract it into an environment variable so we don't leak it.

Removing sensitive information

We need to create a new credential in the Infra folder of our vault, and call it TF_ENCRYPTION. Its content is the following

key_provider "pbkdf2" "password_key" { passphrase = "yourPassphraseGoesHere" }

Then we need to sync the vault managed by the bitwarden CLI, to ensure it has the credential we just added.

$ bw sync
Syncing complete.

Let's update our ~/Projects/infra/.direnv to make it retrieve the TF_ENCRYPTION environment variable

bitwarden_password_to_env Infra PROXMOX_VE_API_TOKEN TF_ENCRYPTION

And let's make direnv allow it

$ direnv allow ~/Projects/infra/

Let's remove the block that provided our password from the encryption block in providers.tf

terraform {
  encryption {
    method "unencrypted" "migrate" {}

    method "aes_gcm" "password_based" {
      keys = key_provider.pbkdf2.password_key
    }

    state {
      method = method.aes_gcm.password_based
      fallback {
        method = method.unencrypted.migrate
      }
    }
  }
  [...]
}

And let's try a tofu plan to confirm that opentofu could read the passphrase from the environment variable

$ tofu plan
╷
│ Warning: Unencrypted method configured
│
│   on line 0:
│   (source code not available)
│
│ Method unencrypted is present in configuration. This is a security risk and
│ should only be enabled during migrations.
╵
data.local_file.ssh_public_key: Reading...
data.local_file.ssh_public_key: Read complete after 0s [id=930cea05ae5e662573618e0d9f3e03920196cc5f]
proxmox_virtual_environment_download_file.debian_13_cloud_image: Refreshing state... [id=local:iso/debian-13-generic-amd64.img]
proxmox_virtual_environment_file.user_data_cloud_config: Refreshing state... [id=local:snippets/user-data-cloud-config.yaml]
proxmox_virtual_environment_vm.k3s-main: Refreshing state... [id=100]

No changes. Your infrastructure matches the configuration.

OpenTofu has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Brilliant! We can now also remove the migrate and fallback blocks so opentofu doesn't trust unencrypted content at all, which will prevent malicious actors from tampering with our file.

terraform {
  encryption {
    method "aes_gcm" "password_based" {
      keys = key_provider.pbkdf2.password_key
    }

    state {
      method = method.aes_gcm.password_based
    }
  }
  [...]
}

Finally we can delete our cleartext backup

$ rm -Rf ~/tfbackups

Voilà, we have an encrypted state that we can push to a remote Github repository, and our state will be reasonably safe by today's standards!

Fully configuring and managing the VM

As we saw when setting up the Proxmox host, ansible can be used to put a machine in a desired state. I can write a playbook to install k3s and copy the kubeconfig file to my admin laptop.
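As an illustration of what such a playbook could look like — a minimal sketch, not the author's actual playbook; the host name and the kubeconfig destination on the laptop are assumptions — using k3s's official install script:

- name: Install k3s on the main node
  hosts: k3s-main              # assumed host/group name from the inventory
  become: true
  tasks:
    - name: Download the official k3s install script
      ansible.builtin.get_url:
        url: https://get.k3s.io
        dest: /tmp/k3s-install.sh
        mode: "0755"

    - name: Run the install script, but only if k3s isn't installed yet
      ansible.builtin.command: /tmp/k3s-install.sh
      args:
        creates: /usr/local/bin/k3s

    - name: Copy the kubeconfig back to the admin laptop
      ansible.builtin.fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: ~/.kube/k3s-main.yaml   # assumed destination on the laptop
        flat: true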

Then there's the question of: how do we make opentofu (who provisions the VMs) and ansible (who deploys services on the VMs) talk to each other? In an ideal world, I would tell opentofu to provision the VM, and then to run an ansible playbook on the hosts it has created.

There's an ansible opentofu provider that's supposed to play this role. I didn't find it intuitive to use, and most people around me told me they found it so cumbersome they didn't use it. There is a more flexible and sturdy solution: ansible dynamic inventories!

Creating a dynamic inventory for k3s VMs

Ansible supports creating inventories by calling plugins that will retrieve information from sources. The Proxmox inventory source plugin lets ansible query Proxmox and retrieve information about VMs, and automatically group them together.

Hang on. Are we really going to create a dynamic inventory for a single VM? I know we're over-engineering things for the sake of learning, but isn't it a bit too much? As always, it's important to consider what problem we're trying to solve. To me, we're solving two different problems:

  1. We make sure that there is a single canonical source of truth, and it is opentofu. The IP defined in opentofu is the one provisioned on Proxmox, and it's the one the dynamic inventory will use to perform operations on the VM. If the VM needs to change its IP, we only have to update it in opentofu, and ansible will follow along.
  2. We build a sane foundation for more complex setups. It will be easy to extend when deploying more VMs to run complex clusters, while not adding unnecessary complexity.

So let's start by making sure we have the Proxmox plugin installed. It is part of the community.general collection on ansible-galaxy, so let's install it

$ ansible-galaxy collection install community.general

Then in the ~/Projects/infra/ansible/inventory directory, we can create a proximighty.proxmox.yaml file. The file name has to end with .proxmox.yaml for the Proxmox plugin to work.

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"

Let's break it down:

  • plugin tells ansible to use the Proxmox inventory source plugin.
  • url is the URL of the Proxmox cluster.
  • user is the Proxmox user we authenticate as. Here I'm reusing the same value as the service account we have created for opentofu.
  • token_id is the ID of the token we have issued for the user. I'm also reusing the same value as the API Key we have created for opentofu.
  • token_secret is the password for the API Key. Here again I'm reusing the same value as the API Key we have created for opentofu. I'm writing it in the plain text file for now; we will clean it up later.

Now we can try to pass that dynamic inventory configuration to ansible for it to build an inventory from Proxmox.

$ ansible-inventory -i proximighty.proxmox.yaml --list
[WARNING]:  * Failed to parse /Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml with auto plugin: HTTPSConnectionPool(host='192.168.1.200', port=8006): Max retries exceeded with url: /api2/json/nodes (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1028)')))
[WARNING]:  * Failed to parse /Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml with yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]:  * Failed to parse /Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml with ini plugin: Invalid host pattern 'plugin:' supplied, ending in ':' is not allowed, this character is reserved to provide a port.
[WARNING]: Unable to parse /Users/thibaultmartin/Projects/infra/ansible/inventory/proximighty.proxmox.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    }
}

And it fails! This is unfortunately not a surprise. We asked the plugin to look up into Proxmox and gave it a https URL. But when Proxmox runs for the first time, it generates a self-signed certificate. It is a perfectly fine certificate we can use to handle https requests. The only problem is that our laptop doesn't trust the Proxmox host, who signed the certificate for itself.

The good news is that Proxmox can retrieve certificates signed by authorities our laptop trusts! The bad news is that we need to understand what we're doing to do it properly. Like earlier, when we configured the Proxmox provider for opentofu, let's ask the Proxmox plugin to use the certificate even if it doesn't trust the authority who signed it. Since we're in a homelab on a home network, the risk of accidentally reaching a host that impersonates our Proxmox host is still fairly low, so it's acceptable to temporarily take this risk here again.

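For the record, the clean long-term fix is to let Proxmox order a certificate from an authority like Let's Encrypt with its built-in ACME client. A rough sketch, assuming the node has a publicly resolvable domain (proximighty.example.com is hypothetical, and a home network often can't satisfy the HTTP-01 challenge anyway):

# on the Proxmox host
$ pvenode acme account register default admin@example.com
$ pvenode config set --acme domains=proximighty.example.com
$ pvenode acme cert order

Until we set something like that up, disabling verification is the pragmatic compromise.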
Let's add the following line to our dynamic inventory configuration to ignore the certificate signature

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
validate_certs: false

And now, running the inventory command again

$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped",
            "proxmox_all_lxc",
            "proxmox_all_qemu",
            "proxmox_all_running",
            "proxmox_all_stopped",
            "proxmox_nodes",
            "proxmox_proximighty_lxc",
            "proxmox_proximighty_qemu"
        ]
    },
    "proxmox_all_qemu": {
        "hosts": [
            "k3s-main"
        ]
    },
    "proxmox_all_running": {
        "hosts": [
            "k3s-main"
        ]
    },
    "proxmox_nodes": {
        "hosts": [
            "proximighty"
        ]
    },
    "proxmox_proximighty_qemu": {
        "hosts": [
            "k3s-main"
        ]
    }
}

Great! We can see that our k3s-main VM appears! We didn't learn a lot about it though. Let's ask the Proxmox plugin to give us more information about the VMs with the want_facts parameter

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
want_facts: true
validate_certs: false

Let's run it again and see if we get more interesting results

$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
    "_meta": {
        "hostvars": {
            "k3s-main": {
                "proxmox_acpi": 1,
                "proxmox_agent": {
                    "enabled": "0",
                    "fstrim_cloned_disks": "0",
                    "type": "virtio"
                },
                "proxmox_balloon": 16384,
                "proxmox_bios": "seabios",
                "proxmox_boot": {
                    "order": "virtio0;net0"
                },
                "proxmox_cicustom": {
                    "user": "local:snippets/user-data-cloud-config.yaml"
                },
                "proxmox_cores": 4,
                "proxmox_cpu": {
                    "cputype": "x86-64-v4"
                },
                "proxmox_cpuunits": 1024,
                "proxmox_description": "Production k3s' main VM",
                "proxmox_digest": "b68508152b464627d06cba6505ed195aa3d34f59",
                "proxmox_ide2": {
                    "disk_image": "local:100/vm-100-cloudinit.qcow2",
                    "media": "cdrom"
                },
                "proxmox_ipconfig0": {
                    "gw": "192.168.1.254",
                    "ip": "192.168.1.221/24"
                },
                "proxmox_keyboard": "en-us",
                "proxmox_memory": "16384",
                "proxmox_meta": {
                    "creation-qemu": "9.2.0",
                    "ctime": "1753547614"
                },
                "proxmox_name": "k3s-main",
                "proxmox_net0": {
                    "bridge": "vmbr0",
                    "firewall": "0",
                    "virtio": "BC:24:11:A6:96:8B"
                },
                "proxmox_node": "proximighty",
                "proxmox_numa": 0,
                "proxmox_onboot": 1,
                "proxmox_ostype": "other",
                "proxmox_protection": 0,
                "proxmox_qmpstatus": "running",
                "proxmox_scsihw": {
                    "disk_image": "virtio-scsi-pci"
                },
                "proxmox_serial0": "socket",
                "proxmox_smbios1": {
                    "uuid": "0d47f7c8-e0b4-4302-be03-64aa931a4c4e"
                },
                "proxmox_snapshots": [],
                "proxmox_sockets": 1,
                "proxmox_status": "running",
                "proxmox_tablet": 1,
                "proxmox_tags": "debian;k3s;production",
                "proxmox_tags_parsed": [
                    "debian",
                    "k3s",
                    "production"
                ],
                "proxmox_template": 0,
                "proxmox_virtio0": {
                    "aio": "io_uring",
                    "backup": "1",
                    "cache": "none",
                    "discard": "ignore",
                    "disk_image": "local:100/vm-100-disk-0.qcow2",
                    "iothread": "1",
                    "replicate": "1",
                    "size": "500G"
                },
                "proxmox_vmgenid": "e00a2059-1310-4b0b-87f7-7818e7cdb9ae",
                "proxmox_vmid": 100,
                "proxmox_vmtype": "qemu"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "proxmox_all_lxc",
            "proxmox_all_qemu",
            "proxmox_all_running",
            "proxmox_all_stopped",
            "proxmox_nodes",
            "proxmox_proximighty_lxc",
            "proxmox_proximighty_qemu"
        ]
    },
    "proxmox_all_qemu": {
        "hosts": [
            "k3s-main"
        ]
    },
    "proxmox_all_running": {
        "hosts": [
            "k3s-main"
        ]
    },
    "proxmox_nodes": {
        "hosts": [
            "proximighty"
        ]
    },
    "proxmox_proximighty_qemu": {
        "hosts": [
            "k3s-main"
        ]
    }
}

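By the way, ansible-inventory can also print the variables of a single host, which is easier to read than the full dump when we only care about one VM:

$ ansible-inventory -i proximighty.proxmox.yaml --host k3s-main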
That's a tonne of information! Probably more than we need, and we still don't know how to connect to a specific host. Let's bring some order to it. First, let's group all the VMs that have k3s in their tags under an ansible group called k3s

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
want_facts: true
groups:
  k3s: "'k3s' in (proxmox_tags_parsed|list)"
validate_certs: false

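The expression for a group is regular Jinja2 evaluated against each host's variables, so we could derive several groups from the same facts. For illustration only (the debian tag shows up in the facts above), we could also group hosts by OS tag:

groups:
  k3s: "'k3s' in (proxmox_tags_parsed|list)"
  debian: "'debian' in (proxmox_tags_parsed|list)"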
And now let's tell ansible how to figure out what IP to use for a host. Since we are the ones provisioning the VMs, we know for sure that we have configured them to use a static IP, on the single virtual network interface we gave them.

Let's use the compose parameter to populate an ansible_host variable that contains the IP of the VM.

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "terraform@pve"
token_id: "provider"
token_secret: "REDACTED"
want_facts: true
groups:
  k3s: "'k3s' in (proxmox_tags_parsed|list)"
compose:
  ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ansible.utils.ipaddr('address')
validate_certs: false

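One caveat: the ipaddr filter is not part of ansible-core. It ships with the ansible.utils collection and relies on the netaddr Python library, so both may need installing first:

$ ansible-galaxy collection install ansible.utils
$ pip install netaddr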
And finally let's test this again

$ cd ~/Projects/infra/ansible/inventory
$ ansible-inventory -i proximighty.proxmox.yaml --list
{
    "_meta": {
        "hostvars": {
            "k3s-main": {
                "ansible_host": "192.168.1.221",
                "proxmox_acpi": 1,
                "proxmox_agent": {
                    "enabled": "0",
                    "fstrim_cloned_disks": "0",
                    "type": "virtio"
                },
                [...]
            }
        }
    },
    "all": {
        "children": [
            "ungrouped",
            "proxmox_all_lxc",
            [...]
            "k3s"
        ]
    },
    "k3s": {
        "hosts": [
            "k3s-main"
        ]
    },
    [...]
}

Brilliant! We now have a k3s group that contains our single k3s-main VM, and ansible was able to retrieve its IP successfully! Let's create a simple playbook that executes two commands on the VM: one that works and one that doesn't.

Let's create a ~/Projects/infra/ansible/k3s/test.yaml

---
- name: Execute commands on the k3s host
  hosts: k3s
  remote_user: thib
  tasks:
    - name: Echo on the remote server
      ansible.builtin.command: echo "It worked"
      changed_when: false

    - name: Get k3s installed version
      ansible.builtin.command: k3s --version
      register: k3s_version_output
      changed_when: false
      ignore_errors: true

The only two notable things here are

  • hosts is the name of the group we created in the dynamic inventory
  • remote_user is the user I have pre-configured via cloud-init when spinning up the VM
$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/test.yaml

PLAY [Execute commands on the k3s host] ********************************************

TASK [Gathering Facts] *************************************************************
ok: [k3s-main]

TASK [Echo on the remote server] ***************************************************
ok: [k3s-main]

TASK [Get k3s installed version] ***************************************************
fatal: [k3s-main]: FAILED! => {"changed": false, "cmd": "k3s --version", "msg": "[Errno 2] No such file or directory: b'k3s'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

PLAY RECAP *************************************************************************
k3s-main                   : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=1

It works! Now that we know how to build an inventory based on Proxmox tags and have made a simple ansible playbook use it, let's move forward and actually deploy k3s on our VM!

Deploying k3s on the k3s-main VM

The k3s maintainers have created a k3s-ansible playbook that can preconfigure a machine so it is ready to run k3s, and deploy a single or multi-node cluster. It's a great playbook, and it's important not to reinvent the wheel. But I also like to understand what I execute, and to keep things minimal to limit the risk of breakage.

Let's take inspiration from this excellent playbook to build one tailored to our (very simple) needs: deploying k3s-server on a single node. When installing k3s server on Debian, the upstream playbook executes two roles:

  • prereq that performs a series of checks to ensure k3s can be installed and run well
  • k3s_server that downloads, preconfigures and installs k3s

We know that the OS powering our virtual machine is always going to be a Debian cloud image. None of the checks in prereq are useful for a fresh vanilla Debian stable. So let's skip it entirely.

Let's have a closer look at what k3s_server does, and carry the important bits over to our playbook. We want to

  1. Check whether k3s is already installed, so we don't overwrite an existing installation
  2. Download the install script
  3. Execute the install script to download k3s
  4. Create a systemd service for k3s to start automatically
  5. Enable the service
  6. Copy the kubeconfig file generated by k3s to our laptop, and merge it with our kubeconfig under the cluster name and context mightykube

To do things cleanly, we will create a k3s_server role in the k3s directory, with the following tasks in ~/Projects/infra/ansible/k3s/roles/k3s_server/tasks/main.yaml.

---
- name: Get k3s installed version
  ansible.builtin.command: k3s --version
  register: k3s_server_version_output
  changed_when: false
  ignore_errors: true

- name: Set k3s installed version
  when: not ansible_check_mode and k3s_server_version_output.rc == 0
  ansible.builtin.set_fact:
    k3s_server_installed_version: "{{ k3s_server_version_output.stdout_lines[0].split(' ')[2] }}"

- name: Download and execute k3s installer if k3s is not already installed
  when: not ansible_check_mode and (k3s_server_version_output.rc != 0 or k3s_server_installed_version is version(k3s_server_version, '<'))
  block:
    - name: Download K3s install script
      ansible.builtin.get_url:
        url: https://get.k3s.io/
        timeout: 120
        dest: /usr/local/bin/k3s-install.sh
        owner: root
        group: root
        mode: "0755"

    - name: Install K3s binary
      ansible.builtin.command:
        cmd: /usr/local/bin/k3s-install.sh
      environment:
        INSTALL_K3S_SKIP_START: "true"
        INSTALL_K3S_VERSION: "{{ k3s_server_version }}"
      changed_when: true

- name: Copy K3s service file [Single]
  ansible.builtin.template:
    src: "k3s-single.service.j2"
    dest: "/etc/systemd/system/k3s.service"
    owner: root
    group: root
    mode: "0644"
  register: k3s_server_service_file_single

- name: Enable and check K3s service
  ansible.builtin.systemd:
    name: k3s
    daemon_reload: true
    state: started
    enabled: true

- name: Check whether kubectl is installed on control node
  ansible.builtin.command: 'kubectl'
  register: k3s_server_kubectl_installed
  ignore_errors: true
  delegate_to: 127.0.0.1
  become: false
  changed_when: false

# Copy the k3s config to a second file to detect changes.
# If no changes are found, we can skip copying the kubeconfig to the control node.
- name: Copy k3s.yaml to second file
  ansible.builtin.copy:
    src: /etc/rancher/k3s/k3s.yaml
    dest: /etc/rancher/k3s/k3s-copy.yaml
    mode: "0600"
    remote_src: true
  register: k3s_server_k3s_yaml_file_copy

- name: Apply k3s kubeconfig to control node if file has changed and control node has kubectl installed
  when:
    - k3s_server_kubectl_installed.rc == 0
    - k3s_server_k3s_yaml_file_copy.changed
  block:
    - name: Copy kubeconfig to control node
      ansible.builtin.fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: "~/.kube/config.new"
        flat: true

    - name: Change server address in kubeconfig on control node
      ansible.builtin.shell: |
        KUBECONFIG=~/.kube/config.new kubectl config set-cluster default --server=https://{{ hostvars[groups['k3s'][0]]['ansible_host'] }}:6443
      delegate_to: 127.0.0.1
      become: false
      register: k3s_server_csa_result
      changed_when:
        - k3s_server_csa_result.rc == 0

    - name: Setup kubeconfig context on control node - mightykube
      ansible.builtin.replace:
        path: "~/.kube/config.new"
        regexp: 'default'
        replace: 'mightykube'
      delegate_to: 127.0.0.1
      become: false

    - name: Merge with any existing kubeconfig on control node
      ansible.builtin.shell: |
        TFILE=$(mktemp)
        KUBECONFIG=~/.kube/config.new:~/.kube/config kubectl config set-context mightykube --user=mightykube --cluster=mightykube
        KUBECONFIG=~/.kube/config.new:~/.kube/config kubectl config view --flatten > ${TFILE}
        mv ${TFILE} ~/.kube/config
      delegate_to: 127.0.0.1
      become: false
      register: k3s_server_mv_result
      changed_when:
        - k3s_server_mv_result.rc == 0

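A quick aside on the not ansible_check_mode guards: in check mode ansible doesn't actually run command tasks, so k3s_server_version_output has no rc to inspect. The guards skip the version comparison and the installer block in that case, which keeps a dry run of the role (with the deploy playbook we create below) safe:

$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/deploy.yaml --check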
Let's also create the template file for the systemd service, k3s-single.service.j2, under ~/Projects/infra/ansible/k3s/roles/k3s_server/templates

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server

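Two systemd details in this template are worth calling out: %N expands to the unit name without its type suffix (k3s here), so EnvironmentFile=-/etc/default/%N resolves to /etc/default/k3s, and the leading - on the EnvironmentFile and ExecStartPre lines tells systemd to carry on if the file is missing or the command fails.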
Let's create a ~/Projects/infra/ansible/k3s/roles/k3s_server/defaults/main.yaml to set the version of k3s we want to install, which we might bump in the future when upgrading.

k3s_server_version: "v1.33.3+k3s1"

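Because it is a role default, this version has the lowest variable precedence, so a one-off upgrade test doesn't even require editing the file: extra vars override it (the version below is hypothetical, and the deploy playbook is created next):

$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/deploy.yaml -e k3s_server_version=v1.34.1+k3s1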
Finally, let's create a ~/Projects/infra/ansible/k3s/deploy.yaml that applies the role we just created to the hosts of the k3s group.

---
- name: Install k3s
  hosts: k3s
  remote_user: thib
  become: true # the role needs root on the VM to install the binary and the systemd unit
  tasks:
    - name: Install k3s server
      ansible.builtin.import_role:
        name: k3s_server

We can now use everything together by calling the playbook we created (and the role it calls) with the dynamic inventory generated by the Proxmox plugin. Let's try!

$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/proximighty.proxmox.yaml k3s/deploy.yaml

Using kubectl on my laptop, I can confirm that my single node cluster is ready

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
mightykube   Ready    control-plane,master   4m    v1.33.3+k3s1

Great! We have used ansible to automate the installation of a single-node k3s cluster, and we can control it from our laptop. Thanks to our dynamic inventory, ansible also figured out automatically which VM to install it onto. Look, Mom! We have a Kubernetes at home!

It's time to clean things up and remove sensitive credentials from our ansible scripts.

Removing sensitive information

When writing our ansible playbook, we didn't add new credentials. When setting up opentofu, we created an API Key and stored it in our Bitwarden vault under the name PROXMOX_VE_API_TOKEN. When configuring the dynamic inventory, we reused that same API Key but wrote it in plain text in the inventory file.

There is a minor difference though. Opentofu uses the API Key formatted as

terraform@pve!provider=REDACTED

Ansible on the other hand uses the API Key formatted as

user: "terraform@pve" token_id: "provider" token_secret: "REDACTED"

The information is the same, but formatted differently. Fortunately for us, ansible supports searching strings with regular expressions. The regex to break it down into the three parts we need is rather simple:

([^!]+)!([^=]+)=(.+)

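Applied to our token format, the capture groups map out as follows:

\1 -> terraform@pve   (user)
\2 -> provider        (token_id)
\3 -> REDACTED        (token_secret)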
Ansible also has a lookup method to read environment variables. Let's put all the pieces together in our dynamic inventory file

plugin: community.general.proxmox
url: https://192.168.1.200:8006
user: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\1'))[0] }}"
token_id: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\2'))[0] }}"
token_secret: "{{ (lookup('ansible.builtin.env', 'PROXMOX_VE_API_TOKEN') | regex_search('([^!]+)!([^=]+)=(.+)', '\\3'))[0] }}"
want_facts: true
groups:
  k3s: "'k3s' in (proxmox_tags_parsed|list)"
compose:
  ansible_host: proxmox_ipconfig0.ip | default(proxmox_net0.ip) | ansible.utils.ipaddr('address')
validate_certs: false

Voilà! Just like that we have removed the secrets from our files, and we're ready to commit the files!

[!info] Why not use the Bitwarden plugin for ansible?

It's a good alternative, but I already rely on direnv to extract the relevant secrets from my vault and store them temporarily in environment variables.

Using the Bitwarden plugin in my playbook would tightly couple the playbook to Bitwarden. By relying on the environment variables, only direnv is coupled to Bitwarden!
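For illustration, the direnv side can be as small as the sketch below, assuming the Bitwarden CLI (bw) is installed, the vault is unlocked, and the item is named PROXMOX_VE_API_TOKEN like mine:

# ~/Projects/infra/.envrc (hypothetical sketch)
export PROXMOX_VE_API_TOKEN="$(bw get password PROXMOX_VE_API_TOKEN)"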

Now we can spin up a VM and install k3s on it in a handful of seconds! Our homelab is making steady progress. Next up we will see how to get services running on our cluster with GitOps!

A huge thanks to my friends and colleagues Davide, Quentin, Ark, Half-Shot, and Ben!
