Want to Add Automation to Terraform in AWS?

I’m studying for the Certified Kubernetes Administrator exam, which requires me to set up multiple VMs with specific software and configurations.

Going through this process by hand is time-consuming and, with so many pieces to configure, definitely error-prone. So I figured, why not automate it with a Terraform file?

To make my life easier, I went ahead and created a bash script to automate the install of Kubernetes and the other software components that were required. But here is the dilemma I ran into: how do I copy this file from my workstation to the EC2 instances that I provision? (Yes, I could use scp, but I’m lazy and don’t want to do that.)

I’ll save you the time: you’ll need to add a provisioner block to the resource you’re defining. Happily, in my rabbit-hole findings, I found that provisioners work with other providers too, not just AWS. But in this case, we’re going to stick with AWS.

Here is the full resource defined for context.

resource "aws_instance" "prod_web" {
  count = 2

  ami           = "ami-12345"
  instance_type = "t2.medium"
  key_name      = "secretkey"

  vpc_security_group_ids = [
    aws_security_group.prod_web.id
  ]

  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }
  }

  tags = {
    linuxfoundation = "traffic"
  }
}

Ok, so let’s break down the main problem. To copy the file to a specific directory, you’ll need to add this inside your resource block:

  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"
  }

Pretty straightforward. Use provisioner with “file” because we’re copying a file. Specify the file you’d like to copy with “source“, then the destination on the newly created EC2 instance(s).

Next, inside the provisioner block, you’ll need to create a connection. This is because Terraform needs to SSH into the new instances in order to copy the file.

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }

Again, pretty straightforward. Set user to the username needed to SSH into the instances, and set type to “ssh”. Here is where I got hung up: I had a key pair generated from AWS that I wanted to use, but I had to dig deeper to find that you need to read the key with the file() function, as in private_key = file("path/of/file.pem"), in order to SSH in. The secret key can live in the directory you’re running Terraform from, or in your ~/.ssh/ folder. After that, you have to specify the host to connect to. Because I’m creating more than one EC2 instance and don’t know the public IP addresses ahead of time, I use self.public_ip, which resolves to the address assigned to each instance.

That’s it! By using provisioner and connection, you’re able to copy a file from your workstation into the newly created EC2 instances in AWS.
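One natural follow-up: the file provisioner only copies the script; it doesn’t run it. If you also want Terraform to execute it, a remote-exec provisioner can sit right after the file provisioner inside the same resource. Here’s a sketch, assuming the same connection details as above (the exact inline commands are up to your script):

```hcl
  # Sketch: run the copied script after the "file" provisioner finishes.
  # This block goes inside the same aws_instance resource.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/kubernetes-fresh-install-Ubuntu.sh",
      "sudo /home/ubuntu/kubernetes-fresh-install-Ubuntu.sh",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }
  }
```

Provisioners run in the order they’re declared, so the file lands before remote-exec tries to run it.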

Check out this full example from a different user on GitHub.

Ever want to try Infrastructure as Code (IaC) with Nutanix?

So I can imagine that if you find yourself here, you want to try Terraform but don’t really know where to start or what to do.

Not an issue. This post should be fairly short and straightforward. Here’s what we’re going to do:

  1. Install Terraform
  2. Create a directory for Terraform files
  3. Grab important information from your Nutanix cluster
  4. Create a Terraform config file
  5. Deploy the same config file
  6. Grab a beer

I’m going to assume that you already have Homebrew installed; if not, it’s an easy way to install Terraform. (The following install instructions are for macOS; if you use Linux, go here.)

In your terminal, begin with the simple: brew tap hashicorp/tap
This adds the HashiCorp repository of Homebrew packages.
Now follow this up with brew install hashicorp/tap/terraform
Lastly, don’t forget to verify that it installed correctly (hint: terraform version).

Ok! Now we’re cooking. From here, and for better organization, create a new directory where you’ll have all your Terraform files.

Now that we have Terraform installed, we need to initialize our directory and declare which provider we’re going to use. In this case, we want to create a provider.tf file to tell Terraform which plugin to install. In your terminal, run vi provider.tf
Note: You can use any .tf file for this

terraform {
  required_version = ">= 0.13"
  ## Define the required version of the provider
  required_providers {
    nutanix = {
      source  = "nutanix/nutanix" # registry namespace (formerly terraform-providers/nutanix)
      version = "~> 1.1"
    }
  }
}


Save, and run terraform init to initialize your directory for use.

Begin to create your main .tf file that will define your end state in Nutanix. I’m providing a template; however, you should check out the Nutanix resources on Terraform.io. You are welcome to download my template here.

Before we start editing, we need some important data from your cluster. SSH into any of your CVMs and locate this info:
1. ISO Image UUID: acli image.list
2. Storage Container UUID: ncli container ls
3. Network UUID: acli net.list
4. Cluster UUID (accessible from Prism Settings under Cluster Details)
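If you’d rather not copy UUIDs around by hand, the Nutanix provider also ships data sources that can look some of them up for you. Here’s a sketch, where "vlan0" and "ubuntu.iso" are hypothetical names standing in for your own subnet and image:

```hcl
# Sketch: look up UUIDs via data sources instead of SSHing into a CVM.
# "vlan0" and "ubuntu.iso" are placeholder names for your environment.
data "nutanix_clusters" "clusters" {}

data "nutanix_subnet" "net" {
  subnet_name = "vlan0"
}

data "nutanix_image" "iso" {
  image_name = "ubuntu.iso"
}

# These can then replace the hard-coded UUIDs later in the config, e.g.:
#   subnet_uuid = data.nutanix_subnet.net.id
```

Check the provider docs for the exact attribute paths; the cluster UUID in particular comes back inside a list of entities that you index into.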


With the UUIDs from the information above, we can now create our config file. You’ll need to start off with the provider block and its important information. Also, you can get the template from my GitHub.

provider "nutanix" {
  username = "<user_name>"   # Prism username and password
  password = "<password>"
  endpoint = "<IP_address>"  # Prism virtual IP address as the endpoint
  insecure = true
}
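A quick aside: hard-coding credentials in a .tf file is risky if the file ever lands in version control. One alternative, sketched here with hypothetical variable names, is to declare variables and supply the values at apply time:

```hcl
# Sketch: variable names here are placeholders, not part of the provider.
variable "nutanix_username" {
  type = string
}

variable "nutanix_password" {
  type      = string
  sensitive = true # hides the value in plan/apply output (Terraform 0.14+)
}

provider "nutanix" {
  username = var.nutanix_username
  password = var.nutanix_password
  endpoint = "<IP_address>"
  insecure = true
}
```

Terraform will prompt for any unset variables, or you can pass them with -var flags or a .tfvars file.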

Below that, add the VM resource:

resource "nutanix_virtual_machine" "MyTestVM_TF" {
  name         = "MyTestVM-TF"
  description  = "Created with Terraform"
  provider     = nutanix
  cluster_uuid = "0005ae5b-547e-4129-0000-0000000076a8" # Cluster UUID you pulled earlier

  num_vcpus_per_socket = 1
  num_sockets          = 1
  memory_size_mib      = 2048

  nic_list {
    # subnet_reference is saying: which VLAN/network do you want to attach here?
    subnet_uuid = "e6d59992-3323-4e39-8364-9a0603597c50" # Your network UUID
  }

  disk_list {
    # data_source_reference in the Nutanix API refers to where the source for
    # the disk device will come from. It could be a clone of a different VM or
    # an image, like we're doing here.
    # SSH into a CVM and run: acli image.list
    data_source_reference = {
      kind = "image"
      uuid = "1c88dd88-b9be-4961-9fd6-5c581d6e6d75" # Specific ISO for your deployment
    }

    device_properties {
      disk_address = {
        device_index = 0
        adapter_type = "IDE"
      }

      device_type = "DISK"
    }

    disk_size_mib   = 100000       # Pay attention to the size here
    disk_size_bytes = 104857600000 # 100000 MiB expressed in bytes

    # SSH into a CVM and run: ncli container ls
    storage_config {
      storage_container_reference {
        kind = "storage_container"
        uuid = "04c64ac7-c695-4350-b77d-61d5285c8fb0" # Container you'd like to use
      }
    }
  }
}

output "ip_address" {
  value = nutanix_virtual_machine.MyTestVM_TF.nic_list_status.0.ip_endpoint_list[0]["ip"]
}

Take a few minutes to input the data you collected from your CVMs. You’ll see it all goes into a resource of type “nutanix_virtual_machine”. The “output” block will print the IP address of your Terraform-created VM.

So with your new file created, run the command terraform apply (you can run terraform plan first if you’d like to preview the changes).

Go grab a beer, this may take a minute.

BOOM!

That’s it.

In this example, we installed Terraform, initialized a directory, and applied a customized config file to deploy one VM.