Install Proxmox virtual machines with Terraform
Thomas Brandstetter
A few words about Terraform
If you have read my last blog post, you already know how to create templates for further provisioning using the Cloud-Init specification. In this post I'd like to show you how you can easily deploy your infrastructure on the virtualisation solution Proxmox using Terraform. First of all, what is Terraform exactly? Terraform is a so-called "Infrastructure as Code" tool developed by HashiCorp, the company that also created Vagrant and other well-known tools for professional cloud solutions. According to Microsoft,
Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model, using the same versioning as DevOps team uses for source code.
What's the benefit, you may ask? One of the biggest advantages is that you don't have to set up your servers and other components manually. Instead, with just a few lines of code you're able to manage your complete infrastructure. This automation brings you stability, quality improvements, resilience and time for more important things. Furthermore, you can integrate your Terraform configuration into your existing VCS and automatically roll out changes to your infrastructure with a proper CI/CD pipeline.
Another good question: why use Terraform if you already have Ansible in place? Have you ever tried to remove a package with Ansible? If you just delete it from your playbook, it stays on your server until you write the matching configuration (the state "absent" is your friend) or remove it manually. Terraform takes care of such changes and has a clever dependency management for "first this, then that" decisions. Writing about the differences between the two would hijack my "Terraform practicing" article, so to keep a long story short:
- Use Terraform for all your infrastructure stuff (VMs and the baseline software on them)
- Use Ansible for configuration management (installing applications and configuring them)
Let's start with practicing
Assumptions
- You already have a Proxmox server installed (it works with other virtualisation and cloud providers too, but this article is about Proxmox and Terraform)
- You already have OS templates with Cloud-Init defined (if not, please read my blog post about it)
1. Installation of Terraform
You can install Terraform on all major platforms, either using the package manager/app store of your system or by downloading the binaries directly from HashiCorp. On macOS I used Homebrew, a package manager for macOS.
brew install terraform
After the installation, let's check the installed Terraform version. Please use version 0.12.x or newer, because there have been many changes between 0.11.x and 0.12.x:
terraform version
2. Installation of the Terraform Proxmox provider
The Proxmox provider is not one of the default providers; it's a so-called third-party provider. Thus, it's necessary to install it manually. As a prerequisite, you have to install the Go programming language on your system. Go is available on all major platforms. For the installation I used Homebrew again.
brew install go
With Go installed, you're able to compile and install the provider plugins using the following commands:
go get github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go get github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
mkdir -p ~/.terraform.d/plugins
cp $GOPATH/bin/terraform-provider-proxmox ~/.terraform.d/plugins
cp $GOPATH/bin/terraform-provisioner-proxmox ~/.terraform.d/plugins
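Assuming the default $GOPATH (usually ~/go) and the paths used above, you can quickly verify that both binaries landed in the user plugin directory:
ls -l ~/.terraform.d/plugins
# should list the two freshly built binaries:
# terraform-provider-proxmox
# terraform-provisioner-proxmox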
Configure your first Terraform project
We installed the Terraform software in step 1 and the necessary Proxmox provider in step 2. Now we're going to configure our first Terraform infrastructure running on Proxmox. In my example, I'd like to set up a small Kubernetes infrastructure which will later run K3s (a lightweight Kubernetes distribution). For that we have to define the following environment:
- 1 x Master Server
- 2 x Node Server
- 1 x Storage Server (for persistent storage)
Configure the Proxmox provider
First, we configure the connection settings for the Proxmox provider. For better readability of our infrastructure code, we split variables and provider into two different configuration files named variables.tf and provider.tf:
variable "pm_api_url" {
default = "https://"ip of your proxmox server":8006/api2/json"
}
variable "pm_user" {
default = "root@pam"
}
variable "pm_password" {
default = "3277db3c"
}
provider "proxmox" {
pm_parallel = 1
pm_tls_insecure = true
pm_api_url = var.pm_api_url
pm_password = var.pm_password
pm_user = var.pm_user
}
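A short note on the credentials: committing a real password as a default value is risky once this code lands in a VCS. Terraform automatically picks up environment variables of the form TF_VAR_<name>, so as an alternative you can define the variable without a default and pass the secret in at runtime. A minimal sketch:
variable "pm_password" {
  type = string
  # no default - supplied via the environment at plan/apply time
}
And in your shell before running Terraform:
export TF_VAR_pm_password='your-proxmox-password'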
Configure the virtual machines
The next step is the main configuration of our K3s cluster servers. Here you have to adapt the following attributes to your environment:
- target_node (the name of your Proxmox instance)
- name (the name of the virtual server)
- clone (the name of the template in Proxmox)
- cores
- memory
- storage (the right storage pool in Proxmox)
- ipconfig0 (use the right IP range for your servers; the count.index is necessary if you have more than one server configured, like the k3s_agent resources in the example below)
The "ignore changes" lifecycle block is necessary, because Terraform likes to change the mac address on the second run - maybe a problem in the Proxmox provider - see here: https://github.com/Telmate/terraform-provider-proxmox/issues/112
resource "proxmox_vm_qemu" "k3s_server" {
count = 1
name = "kubernetes-master-${count.index}"
target_node = "homelab"
clone = "ubuntu-1804LTS-template"
os_type = "cloud-init"
cores = 4
sockets = "1"
cpu = "host"
memory = 1024
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 20
type = "scsi"
storage = "data2"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.2.11${count.index + 1}/24,gw=192.168.2.1"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
resource "proxmox_vm_qemu" "k3s_agent" {
count = 2
name = "kubernetes-node-${count.index}"
target_node = "homelab"
clone = "ubuntu-1804LTS-template"
os_type = "cloud-init"
cores = 4
sockets = "1"
cpu = "host"
memory = 1024
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 20
type = "scsi"
storage = "data2"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.2.12${count.index + 1}/24,gw=192.168.2.1"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
resource "proxmox_vm_qemu" "storage" {
count = 1
name = "storage-node-${count.index}"
target_node = "homelab"
clone = "ubuntu-1804LTS-template"
os_type = "cloud-init"
cores = 4
sockets = "1"
cpu = "host"
memory = 1024
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 20
type = "scsi"
storage = "data2"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.2.13${count.index + 1}/24,gw=192.168.2.1"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
Add ssh-pubkey for Cloud-Init
To get passwordless logins (useful for tools like Ansible), create a variable containing your public SSH key in the variables.tf file.
variable "ssh_key" {
default = "ssh-rsa ..."
}
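Optionally, you can also expose a few attributes as outputs, so that later tooling (for example an Ansible inventory script) can consume them without digging through the state file. A small sketch for an outputs.tf, using the provider-computed ssh_host attribute (it shows up as "known after apply" in the plan output below):
output "k3s_server_hosts" {
  value = proxmox_vm_qemu.k3s_server[*].ssh_host
}

output "k3s_agent_hosts" {
  value = proxmox_vm_qemu.k3s_agent[*].ssh_host
}

output "storage_hosts" {
  value = proxmox_vm_qemu.storage[*].ssh_host
}
After a successful apply, running terraform output prints these values.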
Deployment Time
Terraform has a simple but powerful deployment cycle, which consists of the following steps (the concrete commands for this project follow the list):
- Init - Initializes the Terraform project and installs the needed plugins, dependencies, ...
- Validate - Validates the syntax of the created Terraform .tf files
- Plan - Calculates the steps and changes needed to install/upgrade your infrastructure
- Apply - Applies the changes to the configured systems
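For this project the cycle maps to the following commands; the -out flag writes the calculated plan to a file named planfile, which the apply step below consumes:
terraform init
terraform validate
terraform plan -out planfile
terraform apply planfile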
If you try to skip a step, for example by starting with terraform plan, Terraform will inform you to initialize the project first:
Error: Could not satisfy plugin requirements
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints from each module, run "terraform providers".
Error: provider.proxmox: new or changed plugin executable
So let's start with the initialization.
terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
And Terraform already points you to the next step. But first we'd like to check whether our configuration is correct:
terraform validate
Success! The configuration is valid.
It seems we did a good job with our configuration. Now it's time to see what Terraform wants to deploy:
terraform plan -out planfile
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# proxmox_vm_qemu.k3s_agent[0] will be created
+ resource "proxmox_vm_qemu" "k3s_agent" {
+ agent = 0
+ balloon = 0
+ boot = "cdn"
+ bootdisk = "scsi0"
+ clone = "ubuntu-1804LTS-template"
+ clone_wait = 15
+ cores = 4
+ cpu = "host"
+ force_create = false
+ full_clone = true
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.2.121/24,gw=192.168.2.1"
+ memory = 1024
+ name = "kubernetes-node-0"
+ numa = false
+ onboot = true
+ os_type = "cloud-init"
+ preprovision = true
+ scsihw = "virtio-scsi-pci"
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<~EOT
ssh-rsa ...
EOT
+ target_node = "homelab"
+ vcpus = 0
+ vlan = -1
+ disk {
+ backup = false
+ cache = "none"
+ format = "raw"
+ id = 0
+ iothread = true
+ mbps = 0
+ mbps_rd = 0
+ mbps_rd_max = 0
+ mbps_wr = 0
+ mbps_wr_max = 0
+ replicate = false
+ size = "20"
+ storage = "data2"
+ storage_type = "lvm"
+ type = "scsi"
}
+ network {
+ bridge = "vmbr0"
+ firewall = false
+ id = 0
+ link_down = false
+ model = "virtio"
+ queues = -1
+ rate = -1
+ tag = -1
}
}
# proxmox_vm_qemu.k3s_agent[1] will be created
+ resource "proxmox_vm_qemu" "k3s_agent" {
+ agent = 0
+ balloon = 0
+ boot = "cdn"
+ bootdisk = "scsi0"
+ clone = "ubuntu-1804LTS-template"
+ clone_wait = 15
+ cores = 4
+ cpu = "host"
+ force_create = false
+ full_clone = true
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.2.122/24,gw=192.168.2.1"
+ memory = 1024
+ name = "kubernetes-node-1"
+ numa = false
+ onboot = true
+ os_type = "cloud-init"
+ preprovision = true
+ scsihw = "virtio-scsi-pci"
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<~EOT
ssh-rsa ...
EOT
+ target_node = "homelab"
+ vcpus = 0
+ vlan = -1
+ disk {
+ backup = false
+ cache = "none"
+ format = "raw"
+ id = 0
+ iothread = true
+ mbps = 0
+ mbps_rd = 0
+ mbps_rd_max = 0
+ mbps_wr = 0
+ mbps_wr_max = 0
+ replicate = false
+ size = "20"
+ storage = "data2"
+ storage_type = "lvm"
+ type = "scsi"
}
+ network {
+ bridge = "vmbr0"
+ firewall = false
+ id = 0
+ link_down = false
+ model = "virtio"
+ queues = -1
+ rate = -1
+ tag = -1
}
}
# proxmox_vm_qemu.k3s_server[0] will be created
+ resource "proxmox_vm_qemu" "k3s_server" {
+ agent = 0
+ balloon = 0
+ boot = "cdn"
+ bootdisk = "scsi0"
+ clone = "ubuntu-1804LTS-template"
+ clone_wait = 15
+ cores = 4
+ cpu = "host"
+ force_create = false
+ full_clone = true
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.2.111/24,gw=192.168.2.1"
+ memory = 1024
+ name = "kubernetes-master-0"
+ numa = false
+ onboot = true
+ os_type = "cloud-init"
+ preprovision = true
+ scsihw = "virtio-scsi-pci"
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<~EOT
ssh-rsa ...
EOT
+ target_node = "homelab"
+ vcpus = 0
+ vlan = -1
+ disk {
+ backup = false
+ cache = "none"
+ format = "raw"
+ id = 0
+ iothread = true
+ mbps = 0
+ mbps_rd = 0
+ mbps_rd_max = 0
+ mbps_wr = 0
+ mbps_wr_max = 0
+ replicate = false
+ size = "20"
+ storage = "data2"
+ storage_type = "lvm"
+ type = "scsi"
}
+ network {
+ bridge = "vmbr0"
+ firewall = false
+ id = 0
+ link_down = false
+ model = "virtio"
+ queues = -1
+ rate = -1
+ tag = -1
}
}
# proxmox_vm_qemu.storage[0] will be created
+ resource "proxmox_vm_qemu" "storage" {
+ agent = 0
+ balloon = 0
+ boot = "cdn"
+ bootdisk = "scsi0"
+ clone = "ubuntu-1804LTS-template"
+ clone_wait = 15
+ cores = 4
+ cpu = "host"
+ force_create = false
+ full_clone = true
+ hotplug = "network,disk,usb"
+ id = (known after apply)
+ ipconfig0 = "ip=192.168.2.131/24,gw=192.168.2.1"
+ memory = 1024
+ name = "storage-node-0"
+ numa = false
+ onboot = true
+ os_type = "cloud-init"
+ preprovision = true
+ scsihw = "virtio-scsi-pci"
+ sockets = 1
+ ssh_host = (known after apply)
+ ssh_port = (known after apply)
+ sshkeys = <<~EOT
ssh-rsa ...
EOT
+ target_node = "homelab"
+ vcpus = 0
+ vlan = -1
+ disk {
+ backup = false
+ cache = "none"
+ format = "raw"
+ id = 0
+ iothread = true
+ mbps = 0
+ mbps_rd = 0
+ mbps_rd_max = 0
+ mbps_wr = 0
+ mbps_wr_max = 0
+ replicate = false
+ size = "20"
+ storage = "data2"
+ storage_type = "lvm"
+ type = "scsi"
}
+ network {
+ bridge = "vmbr0"
+ firewall = false
+ id = 0
+ link_down = false
+ model = "virtio"
+ queues = -1
+ rate = -1
+ tag = -1
}
}
Plan: 4 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: planfile
To perform exactly these actions, run the following command to apply:
terraform apply "planfile"
As you can see, Terraform wants to create 4 new servers. It also shows us the detailed configuration, which can be read like a "diff" file:
- "+" means add
- "-" means remove
- "~" means update in place
The file "planfile" can be used for the next apply command:
terraform apply planfile
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
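If you want to double-check what Terraform is now tracking, terraform state list shows one entry per created resource (the addresses below match the plan output above):
terraform state list
# proxmox_vm_qemu.k3s_agent[0]
# proxmox_vm_qemu.k3s_agent[1]
# proxmox_vm_qemu.k3s_server[0]
# proxmox_vm_qemu.storage[0]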
Depending on your hardware, this takes some time. If everything runs fine, you will see the "Apply complete" output shown above. Terraform successfully created 4 new resources, which you can now use to install the K3s cluster. Because we don't like to install anything manually, we will use Ansible for this job.
But this is another story I have to tell ;-)
I wish all of you a happy new year!