In one of my recent posts I explained the different usage scenarios for Terraform and Ansible. Personally, I use Terraform for my infrastructure tasks and Ansible on top of it. For example, installing a K3s cluster involves two steps: first, I set up the necessary server landscape with Terraform; afterwards, I use Ansible playbooks to provision the software components on that infrastructure. Putting this together, I get an easy-to-maintain, optimized build system that integrates well into a CI/CD pipeline. In this post I will show you how to combine Terraform and Ansible.

Prerequisites

First of all, we create a directory for our project configuration. In this example we call it "k3s-server" because we want to provision K3s on it. As subfolders, we create one directory for Terraform and one for Ansible. The Terraform configuration has already been discussed in this post. You should already have a Terraform Cloud account and the remote state configured as described in this post. If you're in a hurry, you can clone the GitHub examples, which already include the Terraform configuration and the Ansible playbooks.

k3s-server
    terraform
    	variables.tf
        provider.tf
        main.tf
        backend.tf
    ansible
    	....
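A quick way to create this skeleton is a single command (directory names taken from this example):

```shell
# Create the project skeleton for the Terraform and Ansible parts
mkdir -p k3s-server/terraform k3s-server/ansible
```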

Create the bridge between Terraform and Ansible

Ansible works with inventories to provision infrastructure. To get the infrastructure components from Terraform into Ansible, we have two options:

  • Manually, because the information is static (static IPs, static hostnames, a fixed number of servers, ...)
  • Dynamically, because we need to scale, use different hostname conventions, or work with a different IP range

Again, for development purposes there is no problem doing it manually, but in large or production environments there is no way around the dynamic approach. Thus, I will show you the dynamic one.
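For reference, the manual route is just a static inventory file checked into the repository. A minimal sketch, with illustrative hostnames and IPs:

```ini
; ansible/inventory.ini (hypothetical static inventory)
[k3s_master]
kubernetes-master-0 ansible_host=192.168.2.11

[k3s_agent]
kubernetes-node-0 ansible_host=192.168.2.21
kubernetes-node-1 ansible_host=192.168.2.22
```

Every time the infrastructure changes, a file like this has to be updated by hand, which is exactly what the dynamic approach avoids.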

For dynamic inventory creation we need two components, both created by Nicholas Bering. The first is the Terraform Ansible provider and the second is a script that acts as middleware between the Terraform remote state file API and the dynamic inventory hook in Ansible. If you like, you can read more about the design principles in Nicholas Bering's article.

Install and configure the Terraform Ansible provider

First, we have to install the Terraform Ansible provider using Go:

$ go get github.com/nbering/terraform-provider-ansible
$ cd $GOPATH/src/github.com/nbering/terraform-provider-ansible
$ make

# Create the Terraform plugins directory if it does not exist.
$ mkdir -p ~/.terraform.d/plugins

$ cp $GOPATH/bin/terraform-provider-ansible $HOME/.terraform.d/plugins/

You can also download a precompiled release for your operating system from here.

Then we have to adapt the existing provider.tf configuration file

provider "ansible" {
  version = "1.0.3"
}
provider.tf

Additionally, we have to set the minimum required Terraform version in a new file "versions.tf"

terraform {
  required_version = ">= 0.12"
}
versions.tf

Last but not least, the inventory for Ansible has to be added to the main.tf file:

# Ansible inventory
resource "ansible_host" "k3s_master" {
  count              = 1
  inventory_hostname = "kubernetes-master-${count.index}"
  vars = {
    ansible_host = "192.168.2.1${count.index + 1}"
  }
  groups = ["k3s_master"]
}

resource "ansible_host" "k3s_agent" {
  count              = 2
  inventory_hostname = "kubernetes-node-${count.index}"
  vars = {
    # Separate range so agent IPs do not collide with the master
    ansible_host = "192.168.2.2${count.index + 1}"
  }
  # Group name must match the one referenced in group_vars
  groups = ["k3s_agent"]
}

resource "ansible_host" "storage" {
  count              = 1
  inventory_hostname = "storage-node-${count.index}"
  vars = {
    ansible_host = "192.168.2.3${count.index + 1}"
  }
  groups = ["storage"]
}
main.tf

Install the dynamic inventory script

The inventory script is part of my Ansible playbook repository to eliminate dependencies later in the CI/CD pipeline. You can download the script here:

https://github.com/nbering/terraform-inventory/blob/master/terraform.py

Configure the Ansible K3s playbook

For my own purposes I've already prepared an Ansible project for K3s. If you like, you can change the component versions (K3s, Traefik, cert-manager), but overall there is no need to change anything for our testing example:

ssh_user: ubuntu
auto_up_disable: false
core_update_level: true
gather_facts: true
ansible_python_interpreter: /usr/bin/python3
k3s_config_path: /etc/rancher/k3s/k3s.yaml
run_in_container: false

k3s_master_type: single
k3s_version: v1.17.5+k3s1
k3s_cluster_secret: 1234567890123456789012345678901234567890123
keepalived_ip_master: "{{ groups['k3s_master'] | map('extract',hostvars,'ansible_host') | first }}"
keepalived_ip_agent: "{{ groups['k3s_agent'] | map('extract',hostvars,'ansible_host') | first }}"
keepalived_router_id_master: 110
keepalived_router_id_agent: 210
nfs_server: "{{ groups['storage'] | map('extract',hostvars,'ansible_host') | first }}"

traefik_version: 2.2.1

certmanager_version: 0.15.1
certmanager_email: yourmail@yourdomain.xxx # Necessary for letsencrypt certificates
ansible/group_vars/all

Finally, let's play with Terraform and Ansible together

First, we run Terraform to prepare our infrastructure for the Ansible deployment. This is achieved with the four standard Terraform commands:

$ terraform init
$ terraform validate
$ terraform plan
$ terraform apply

If everything went fine, you should have a working infrastructure for your Ansible run. Remember, we're using a dynamic inventory created from a Terraform API call, which is handled by the terraform.py script. Let's see if we get the inventory back from Terraform Cloud:

$ ANSIBLE_TF_WS_NAME=k3s-inventory ANSIBLE_TF_DIR="./terraform" ansible/terraform.py
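If the call succeeds, the script prints Ansible's dynamic inventory JSON format. Abbreviated and with illustrative values, the output looks roughly like this:

```json
{
  "_meta": {
    "hostvars": {
      "kubernetes-master-0": { "ansible_host": "192.168.2.11" }
    }
  },
  "k3s_master": {
    "hosts": ["kubernetes-master-0"]
  }
}
```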

This should print a JSON document compatible with the Ansible dynamic inventory format. To run the Ansible playbook, we use the following command:

$ ANSIBLE_TF_DIR="./terraform" ansible-playbook -i ansible/terraform.py ansible/site.yml

After a long but successful run, you should see something like this:

kubernetes-master-0 : ok=43   changed=15   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0   
kubernetes-node-0  : ok=23   changed=1    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
kubernetes-node-1  : ok=23   changed=1    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
kubernetes-node-2  : ok=23   changed=1    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0   
storage-node-0     : ok=20   changed=1    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0  

Now you can log in to the master node and run the command:

$ sudo kubectl get nodes

NAME                  STATUS   ROLES    AGE   VERSION
kubernetes-node-2     Ready    <none>   22d   v1.17.5+k3s1
kubernetes-master-0   Ready    master   22d   v1.17.5+k3s1
kubernetes-node-1     Ready    <none>   22d   v1.17.5+k3s1
kubernetes-node-0     Ready    <none>   22d   v1.17.5+k3s1

Conclusion: We combined two of the best automation solutions in a centralized Git repository. With only five commands we installed the infrastructure and the necessary software components of a whole Kubernetes cluster. In the future, you can run these commands automatically using a CI/CD solution like GitLab. But that is a story for another post.

For now, try to scale up your cluster using the Terraform configuration. Run the commands again and see what happens.
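For example, raising the agent count in main.tf is enough to get a new node into both the infrastructure and the inventory (a sketch; only the changed argument is shown):

```hcl
resource "ansible_host" "k3s_agent" {
  count = 3  # previously 2; adds kubernetes-node-2 to the inventory
  # ... all other arguments stay unchanged
}
```

Afterwards, run terraform apply and the ansible-playbook command again, and the new node should join the cluster.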