-
Hi all, I have been trying to get set up over the weekend and keep running into the following error when trying to set up a new VM: The only similar thing I have seen is caused by an issue with using SSH versus the built-in console (see tteck/Proxmox#276). I'm not sure whether my issue is related to a recent change to the provider, to Proxmox, or is just a beginner error :)

Here is my provider file. Note that I am passing the credentials for the `root@pam` user for the node.

```hcl
terraform {
  required_version = ">=1.5"

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "0.68.1"
    }
  }
}

provider "proxmox" {
  endpoint = var.proxmox_uri
  username = var.proxmox_user
  password = var.proxmox_password
  insecure = var.proxmox_insecure
  tmp_dir  = "/var/tmp"

  ssh {
    agent = true
  }
}
```

Here is my resource file:

```hcl
data "local_file" "ssh_public_key" {
  filename = var.ssh_pubkey_path
}

resource "proxmox_virtual_environment_download_file" "debian_12_bookworm_qcow2_img" {
  content_type = "iso"
  datastore_id = "local"
  file_name    = "debian-12-genericcloud-amd64.qcow2.img"
  node_name    = "pve"
  url          = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2"
}

resource "proxmox_virtual_environment_vm" "debian_test" {
  name        = "Podman"
  description = "Managed by Terraform"
  node_name   = "pve"
  tags        = ["terraform", "podman", "debian"]
  vm_id       = 200

  initialization {
    ip_config {
      ipv4 {
        address = var.podman_config.ipv4
        gateway = var.podman_config.gateway
      }
    }

    user_account {
      username = var.admin_account.username
      password = var.admin_account.password
      keys     = [trimspace(data.local_file.ssh_public_key.content)]
    }
  }

  disk {
    datastore_id = "local-lvm"
    file_id      = proxmox_virtual_environment_download_file.debian_12_bookworm_qcow2_img.id
    interface    = "virtio0"
    iothread     = true
    size         = var.podman_config.disk_size
  }

  cdrom {
    interface = "ide2"
  }

  network_device {
    bridge = "vmbr0"
  }

  cpu {
    cores = var.podman_config.cpu_cores
    type  = "x86-64-v2-AES"
  }

  memory {
    dedicated = var.podman_config.memory
  }

  operating_system {
    type = "l26"
  }
}
```

Does anything here look like I have done something wrong, or is this a potential bug? Thanks in advance for any guidance you can provide :)
Replies: 2 comments 1 reply
-
Hey @shinn5112 👋🏼

Looks like the shell the provider gets on the PVE node isn't a `root` shell. Since you're using an SSH agent, the shell might belong to a different user depending on your agent config, keys, or user setup. That user might not have access to `/usr/sbin/pvesm` on the PVE node.

Can you try SSHing into the PVE node directly from your terminal as `root` and running `pvesm` to see if it works?

Also, what's the default shell for `root` on your PVE node? The provider only works with `bash`, so that could be part of the issue.
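If the agent is handing the provider the wrong identity, one option is to pin the SSH user explicitly in the provider's `ssh` block. A minimal sketch (assuming a recent bpg/proxmox version where the `ssh` block accepts a `username` attribute; check the provider docs for your version):

```hcl
provider "proxmox" {
  endpoint = var.proxmox_uri
  username = var.proxmox_user
  password = var.proxmox_password

  ssh {
    agent    = true
    # Force SSH sessions to the node as root, regardless of
    # which identity the agent would pick by default.
    username = "root"
  }
}
```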
-
Hey @bpg, thanks for the help! It turns out I wasn't running as the `root@pam` user. I forgot to save the changes to my tfvars file, so I was running with a different PAM account. I guess I had been looking at the screen too long and made a simple mistake :/ Again, thanks for the help, and happy holidays! :)
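For anyone landing here later: the fix amounted to making sure the tfvars file actually carries the `root@pam` credentials. A sketch, using the variable names from the provider file above (the endpoint value is illustrative):

```hcl
# terraform.tfvars -- values illustrative; don't commit real credentials
proxmox_uri      = "https://pve.example.com:8006/"
proxmox_user     = "root@pam"
proxmox_password = "<your root password>"
proxmox_insecure = true
```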