Ever want to try Infrastructure as Code (IaC) with Nutanix?

I can imagine, if you find yourself here, that you want to try Terraform but don't really know where to start or what to do.

Not an issue. This post should be fairly short and straightforward. Here's what we're going to do:

  1. Install Terraform
  2. Create a directory for Terraform files
  3. Grab important information from your Nutanix cluster
  4. Create a Terraform config file
  5. Apply that config file
  6. Grab a beer

I'm going to assume that you already have Brew installed. If not, this is an easy way to install it. (The following install instructions are for macOS; if you use Linux, go here.)

In your terminal, begin with the simple:

$ brew tap hashicorp/tap

This adds the HashiCorp tap (repository) to Homebrew. Now, follow it up with:

$ brew install hashicorp/tap/terraform

Lastly, don't forget to verify that it installed correctly (hint: terraform version).

Ok! Now we’re cooking. From here, and for better organization, create a new directory where you’ll have all your Terraform files.
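For example (the directory name is entirely up to you):

$ mkdir nutanix-terraform
$ cd nutanix-terraform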

Now that we have Terraform installed, we need to initialize our directory and designate which provider we're going to use. In this case, we want to create a provider.tf file that tells Terraform which plugin to install. In your terminal, run vi provider.tf
Note: You can use any .tf file for this

terraform {
  required_version = ">= 0.13"
  ## Define the required version of the provider
  required_providers {
    nutanix = {
      source  = "terraform-providers/nutanix"
      version = "~> 1.1"
    }
  }
}


Save, and run terraform init to initialize your directory for use.
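If the init works, Terraform downloads the Nutanix provider plugin and ends with a success message:

$ terraform init
...
Terraform has been successfully initialized!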

Begin to create your main .tf file that will define your end state in Nutanix. I'm providing a template; however, you should also check out the Nutanix provider docs on Terraform.io. You are welcome to download my template here.

Before we start editing, we need some important data from your cluster. SSH into any of your CVMs and run the commands below to locate it:
1. ISO Image UUID: acli image.list
2. Storage Container UUID: ncli container ls
3. Network UUID: acli net.list
4. Cluster UUID: available from Prism Settings under Cluster Details


With the UUIDs from the information above, we can now create our config file. You'll need to start off with the provider and its connection details. Also, you can get the template from my Github.

provider "nutanix" {
 username = "<user_name>"  #Prism username and password
 password = "<password>"
 endpoint = "<IP_address>"  #prism virtual IP address as an endpoint
 insecure = true
}

Below that, add the VM resource:

resource "nutanix_virtual_machine" "MyTestVM_TF" {
 name = "MyTestVM-TF"
 description = "Created with Terraform"
 provider = nutanix
 cluster_uuid = "0005ae5b-547e-4129-0000-0000000076a8" #Cluster UUID you pulled earlier
  num_vcpus_per_socket = 1
  num_sockets = 1
  memory_size_mib = 2048

  nic_list {
     # subnet_reference is saying, which VLAN/network do you want to attach here?
     subnet_uuid = "e6d59992-3323-4e39-8364-9a0603597c50" #Your network UUID
   }

  disk_list {
  # data_source_reference in the Nutanix API refers to where the source for
  # the disk device will come from. Could be a clone of a different VM or a
  # image like we're doing here
  # ssh into the CVM and run: acli image.list
  data_source_reference = {
   kind = "image"
   uuid = "1c88dd88-b9be-4961-9fd6-5c581d6e6d75" #Specific ISO for your deployment
    }

  device_properties {
    disk_address = {
   device_index = 0
   adapter_type = "IDE"
    }

    device_type = "DISK"
  }
    disk_size_mib   = 100000                # Pay attention to the size here
    disk_size_bytes = 104857600000

    # ssh into cvm and run: ncli container list
    storage_config {
      storage_container_reference {
        kind = "storage_container"
        uuid = "04c64ac7-c695-4350-b77d-61d5285c8fb0" # Conatiner you'd like to use
    }
   }

   }
}

output "ip_address" {
  value = nutanix_virtual_machine.MyTestVM_TF.nic_list_status.0.ip_endpoint_list[0]["ip"]
}

Take a few minutes and input the data you collected from your CVMs. As you can see, it all goes into a single "nutanix_virtual_machine" resource. The "output" block will print the IP address of your Terraform-created VM.

So with your new file created, I want you to run the command terraform apply.
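If you'd like to preview the changes before committing to them, terraform plan shows exactly what Terraform intends to create, and terraform output can re-print the VM's IP address any time after the apply:

$ terraform plan
$ terraform apply
$ terraform output ip_address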

Go grab a beer, this may take a minute.

BOOM!

That’s it.

In this example, we installed Terraform, initialized a directory, and applied a customized config file to deploy a single VM.

Are you frustrated when git continues to ask for your user credentials?

Good chance, if you're reading this, you might be dealing with some frustration while attempting to push your files to Github. I myself have two-factor authentication for my browser login, and SSH keys added to my account for CLI access. But something happened to me last night that brought endless frustration.

One repo kept prompting me for my username and password on every single push. How could this happen if I already had other repos pushing successfully?

Well, this is fairly common for users of Github. It all stems from one thing…your cloning method! For this repo, I had cloned via HTTPS, which is why I was being prompted for my username and password, even though I already had SSH keys enabled. So here is what you do:

Go to the repo of your choice and copy ONLY the SSH link, then point your existing remote at it:

git remote set-url origin git@github.com:username/repo.git
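You can confirm the remote now points at SSH with git remote -v:

$ git remote -v
origin  git@github.com:username/repo.git (fetch)
origin  git@github.com:username/repo.git (push)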

That’s it.

Note: this assumes you've already run git init in the directory (or fetched/cloned the repo).

Using Ansible to Automate the Updating of your VMs

Are you looking to run a simple update on all of your Linux VMs without having to SSH into each one? If that's the case, you can use Ansible to perform interactions like updates across a wide range of VMs on your Nutanix, VMware, or cloud systems. But what is Ansible?

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows.

Other software tools used for automation are Chef, Puppet, and Salt, each popular with different groups of engineers. What sets Ansible apart, however, is its ease of use and the fact that no agent needs to be installed on the host machine. By defining the state of an environment through a simple YAML file, you can run configurations on thousands of machines. I found a high-level video of what Ansible is and does.

If you’re COMPLETELY new to Ansible, then obviously you’ll need to go through the installation process. Take a look at these instructions to get up and running.
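If you're on macOS or Linux, one common route is pip (just one of several options those instructions cover):

$ pip install ansible
$ ansible --version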

This particular post is about running an update on your machines.
What you will need:
1. Ansible installed
2. Host IP address that you want to update
3. Terminal window
4. Some template files from my Github.

First, we need to ensure that our Inventory, or hosts, file is configured correctly. You'll find the default hosts file at /etc/ansible/hosts on Linux and macOS.

Unless you specify another Inventory file on the CLI, Ansible will use the default hosts file for the pre-configured hosts you'll be interacting with. So, let's break the hosts file down (a sketch of what such a file might look like follows the breakdown).
#1 – This lists the variables that I want to use for my environment. Just like in Python, you can set a variable that can be called later. '{{ secret_password }}' tells this file to pull the password from an encrypted secret file in the same directory, the variable being secret_password.
#2 – A combination of the name you want to give the VM and the IP address associated with that VM.
#3 – The username I need to SSH into the VM to carry out my commands.
#4 – The generated SSH key that I am going to use for access into each VM.
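Here's a minimal sketch of what an inventory along those lines might look like. The group name, VM names, IP addresses, username, and key path are all hypothetical; swap in your own:

# 1 – group variables; secret_password comes from the encrypted vault file
# 3 – ansible_user is the SSH username; 4 – the key used for access
[linuxvms:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_become_pass='{{ secret_password }}'

# 2 – each entry pairs a VM name with its IP address
[linuxvms]
vm01 ansible_host=10.0.0.11
vm02 ansible_host=10.0.0.12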

You CAN specify an account password for SSH in this file, but it'll be stored as plain text. Don't do that. Use ansible-vault to save your passwords in an encrypted file in the same directory as your playbooks.
Enter the command below in your terminal:

$ ansible-vault create passwd.yml

From here, you’ll need to create a password to access this encrypted file. When a new vim screen appears, add the variable name and password you’ll use.
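The vault file itself is just YAML. For example, with a made-up password:

secret_password: MySup3rS3cr3tPass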

Now we need to ensure our playbook is in YAML format and is set to update all the packages.


For this to work, we need to define which hosts we want to run the update on. In this case, everything! Therefore, hosts is set to all. After defining the hosts, specify the tasks you want to run. Because we're updating the entire cache, we want to set the apt module to: update_cache=yes force_apt_get=yes cache_valid_time=3600. Upgrading the packages will require a new task with apt set to: upgrade=dist force_apt_get=yes. Put together, it looks something like the sketch below.
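A minimal playbook along those lines (assuming Debian/Ubuntu hosts, since it relies on the apt module):

---
- hosts: all
  become: true
  tasks:
    - name: Update the apt cache
      apt:
        update_cache: yes
        force_apt_get: yes
        cache_valid_time: 3600

    - name: Upgrade all packages to the latest version
      apt:
        upgrade: dist
        force_apt_get: yes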

Let’s now ping our VMs to ensure they’re online before we run our playbook.
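A quick way to do that is Ansible's ping module, pointed at the same inventory file (here assumed to be named inventory, matching the playbook command further down):

$ ansible all -m ping -i inventory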

Success!

Now that everything is online and we have our files ready, let’s run our playbook.
$ ansible-playbook -i inventory --ask-vault-pass --extra-vars '@passwd.yml' playbook.yaml

Here I selected a different inventory file in my same directory, required that Ansible pull the secret from passwd.yml, and ran my playbook.yaml, which updates all my hosts.


And now all my VMs are running updates.

In this post you learned:
1. What Ansible is
2. How to run a playbook
3. How to update all your Linux VMs in Nutanix

Playing around with Kubernetes and Nutanix Karbon

Nutanix's Karbon automates the deployment and scaling of Kubernetes infrastructure on the Nutanix platform. If you've ever set up Kubernetes manually, you know from experience that the setup itself is time-consuming and cumbersome. As much as Kubernetes makes orchestrating all your containers easier, the setup process is complex and prone to errors.

So if you want to play around with Karbon, but don’t quite know where to start, here you go!

Assuming you haven’t installed Kubernetes, follow this link to the installation process from kubernetes.io.
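If you're on macOS, one quick option is Homebrew, in keeping with the rest of this blog (kubernetes.io lists several others):

$ brew install kubectl
$ kubectl version --client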

When you’ve verified that kubectl is installed, be sure to check out the video setup from Nutanix on creating a cluster.

I took some time to create a Github repo with instructions on how to deploy a dashboard gui, and a cluster using both a YAML file and regular commands.

https://github.com/mathurin186/NutanixKarbon

To deploy a fun dashboard in Karbon, open up your terminal application (MacOS/Linux) and run the command below:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Followed by:

$ kubectl proxy

These commands enable the dashboard to run so you can access a user-friendly view for managing your cluster. From here, all you need to do is open a browser and paste the following link into it:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

When you want to deploy an application, you can do it two ways:
1. Through commands
2. Through YAML files
You can do both, which I’ll show you. However, if you want a quick deployment, then I’d recommend that you stick with deploying via YAML files. Scaling your cluster and deploying applications via command line is great to learn, but if you want something working and live then go with YAML.

$ kubectl create deployment --image nginx my-nginx
$ kubectl get pods
$ kubectl get deployment
$ kubectl scale deployment --replicas 2 my-nginx
$ kubectl get pods
$ kubectl expose deployment my-nginx --port=80 --type=NodePort
$ kubectl get services

Deploying your cluster via YAML is just as simple. What you need to do is create a directory where you will house your YAML files. Then change to your new directory and clone the github repo posted earlier.

$ mkdir Karbon
$ cd Karbon/
$ git clone https://github.com/mathurin186/NutanixKarbon.git

You should now see the repo's contents cloned into your Karbon directory.

The last two commands will deploy the YAML files to create a Kubernetes cluster on Nutanix Karbon. Running “get pods” verifies that the deployment was a success.

$ kubectl apply -k .
$ kubectl get pods
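For reference, a minimal Deployment manifest looks roughly like this. This is a generic nginx example of my own, not necessarily the actual contents of the repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80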

From this post, you should know how to deploy a Karbon cluster, and provision applications on demand. This will give you the necessary experience to see how easy it is to run Kubernetes in Nutanix.

Enjoy!