Want to Add Automation to Terraform in AWS?

I’m studying for the Certified Kubernetes Administrator exam, which requires me to set up multiple VMs with specific software and configurations.

Going through this process by hand is time-consuming, and definitely error-prone given the number of pieces I need to configure. So, I figured, why not automate it with a Terraform file?

To make my life easier, I went ahead and created a bash script to automate the install process of Kubernetes and the other software components that were required. But here is the dilemma I ran into: how do I copy this file from my workstation to the EC2 instances that I provision? (Yes, I could use scp, but I’m lazy and don’t want to do that.)

I’ll save you the time: you’ll need to add a provisioner block to the resource you’re defining. Happily, while going down this rabbit hole, I found that provisioners work with other providers too, not just AWS. But in this case, we’re going to stick with AWS.

Here is the full resource defined for context.

resource "aws_instance" "prod_web" {
  count = 2

  ami           = "ami-12345"
  instance_type = "t2.medium"
  key_name      = "secretkey"

  vpc_security_group_ids = [
    aws_security_group.prod_web.id
  ]

  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"

    connection {
      user        = "ubuntu"
      type        = "ssh"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }
  }

  tags = {
    "linuxfoundation" = "traffic"
  }
}

Ok, so let’s break this down for the main problem. To copy the file to a specific directory, you’ll need to add this inside your resource block.

  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"
  }
Pretty straightforward. Use a provisioner of type “file” because you’re specifying that we are copying a file. Specify the file you’d like to copy with “source”, then its destination path in the newly created EC2 instance(s).
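As a side note, the file provisioner also accepts an inline “content” argument in place of “source”, which is handy when the file is small or generated on the fly. A minimal sketch (the script contents here are just placeholder text):

```hcl
  # Hypothetical variant: write inline content instead of copying a local file.
  provisioner "file" {
    content     = "#!/bin/bash\necho 'provisioned by Terraform'\n"
    destination = "/home/ubuntu/hello.sh"
  }
```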

Next, inside the provisioner block, you will need to define a connection. This is because Terraform needs to SSH into the new instances in order to copy the file.

    connection {
      user        = "ubuntu"
      type        = "ssh"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }

Again, pretty straightforward. Set “user” to the username needed to SSH into the instances, and set “type” to “ssh”. Here is where I got hung up: I had a key pair generated from AWS that I wanted to use, but had to dig deeper to find that you need to read the private key with the file() function, as in file("path/of/file.pem"), in order to SSH in. The key file could live in your working directory or in the ~/.ssh/ folder. After that, you have to specify the host to connect to. Because I’m creating more than one EC2 instance and don’t know the public IP addresses ahead of time, I use self.public_ip, which resolves to each instance’s own assigned public IP.
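And since the whole point of copying the script is to run it, you can go one step further with a remote-exec provisioner that reuses the same connection details. This is a sketch, assuming the same user, key, and script path as above (and that the script needs root):

```hcl
  # Sketch: run the copied script after the file provisioner finishes.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/ubuntu/kubernetes-fresh-install-Ubuntu.sh",
      "sudo /home/ubuntu/kubernetes-fresh-install-Ubuntu.sh",
    ]

    connection {
      user        = "ubuntu"
      type        = "ssh"
      private_key = file("secretkey.pem")
      host        = self.public_ip
    }
  }
```

Provisioners run in the order they’re defined inside the resource, so placing this after the file provisioner ensures the script exists before it executes.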

That’s it! By using a provisioner with a connection block, you’re able to copy a file from your workstation into the newly created EC2 instances in AWS.

Check out this full example from a different user on GitHub.