Connecting your Kubernetes Application to Cloud SQL

If you couldn't tell, I'm a bit of a Kubernetes nerd. I started learning in 2018 and have always had a passion for tinkering with the tool. But one thing I've always shied away from is databases.

So, after some thought, I realized that I need to know databases like the back of my hand. In my next post, we'll talk about databases and the use cases behind them, but for this one, let's get our hands dirty. In this post, we're going to deploy a simple Kubernetes application, then connect it to a PostgreSQL instance in GCP's Cloud SQL. We'll learn the "why" of this architecture, which is:

  • Protect your database from unauthorized access by using an unprivileged service account on your GKE nodes.
  • Put privileged service account credentials into a container running on GKE.
  • Use the Cloud SQL Proxy to offload the work of connecting to your Cloud SQL instance and reduce your application's knowledge of your infrastructure.

What you’ll need for this:
1. GCP account with ability to create Google Kubernetes Engine (GKE) clusters.
2. Ability to clone from GitHub

Cloud SQL Proxy
By using the Cloud SQL Proxy, you delegate connection management to Google. This frees your application from knowing connection details and streamlines secret handling. The Cloud SQL Proxy, conveniently provided as a Docker container by Google, can run alongside your application within the same GKE pod for a seamless setup.

Architecture

The application and its sidecar container are deployed in a single Kubernetes (k8s) pod running on the only node in the GKE cluster. The application communicates with the Cloud SQL instance via the Cloud SQL Proxy process listening on localhost.

The k8s manifest builds a single-replica Deployment object with two containers, pgAdmin and the Cloud SQL Proxy. Two secrets are installed into the GKE cluster: the Cloud SQL instance connection information and a service account key credentials file, both used by the Cloud SQL Proxy container's Cloud SQL API calls.

The application doesn’t have to know anything about how to connect to Cloud SQL, nor does it have to have any exposure to its API. The Cloud SQL Proxy process takes care of that for the application. It’s important to note that the Cloud SQL Proxy container is running as a ‘sidecar’ container in the pod.
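
To make the sidecar pattern concrete, here's a minimal sketch of what a two-container Deployment like this could look like. The names, image tag, proxy flags, and secret name below are illustrative assumptions, not the exact manifest shipped with the demo repo.

# Rough sketch only – adjust names, the instance connection string, and the secret to your setup.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin                 # the application container
        image: dpage/pgadmin4
      - name: cloudsql-proxy          # the sidecar; the app reaches it on localhost:5432
        image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
        command: ["/cloud_sql_proxy",
                  "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432",
                  "-credential_file=/secrets/credentials.json"]
        volumeMounts:
        - name: cloudsql-sa-key
          mountPath: /secrets
          readOnly: true
      volumes:
      - name: cloudsql-sa-key
        secret:
          secretName: cloudsql-sa-key # the service account key secret created by the scripts
EOF

Because both containers share the pod's network namespace, pgAdmin can reach the proxy at 127.0.0.1:5432 without knowing anything about the Cloud SQL instance itself.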

Getting Started
Log in to your GCP console and select your project. From there, you'll need to activate a Cloud Shell, which you can find in the top-right part of the console.

Once it's activated, this is what you should see.

Keep in mind that this can be done from the terminal on your work/personal computer as well, but I want to keep things simple, so we'll go with the provided Cloud Shell.

When you open a console in GCP, your credentials and PROJECT_ID will be connected automatically so you won’t need to do anything additional.
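
If you want to double-check, these standard gcloud commands will print the account and project Cloud Shell picked up:

gcloud config get-value account
gcloud config get-value project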

Next, you will need to download the demo resources. Lucky for us, engineers at GCP went ahead and created some code for us to deploy and see how this works in the cloud. Run the following commands to grab and extract it:

gsutil cp gs://spls/gsp449/gke-cloud-sql-postgres-demo.tar.gz .
tar -xzvf gke-cloud-sql-postgres-demo.tar.gz

From here, you'll run everything from the extracted directory.

cd gke-cloud-sql-postgres-demo

Now, the fun begins.

DEPLOYMENT

This particular deployment is automated, but you'll need to define a few parameters first:

  • DATABASE_USER_NAME – a username for your Cloud SQL instance (you create this; any name works)
  • PGADMIN_USERNAME – a username for the pgAdmin console (you create this too)
  • USER_PASSWORD – the password to log in to the Postgres instance
  • PG_ADMIN_CONSOLE_PASSWORD – the password to log in to the pgAdmin UI

Let's start by saving your account into a variable that we'll need later:

PG_EMAIL=$(gcloud config get-value account)

Run the command below to kick off the deployment and create the two usernames. Keep in mind that you'll be prompted to create a password for each.
./create.sh dbadmin $PG_EMAIL

While this is deploying, you should understand the different scripts being run. Note: this may take up to 10 min.

  • enable_apis.sh – enables the GKE API and Cloud SQL Admin API.
  • postgres_instance.sh – creates the Cloud SQL instance and additional Postgres user. Note that gcloud will timeout when waiting for the creation of a Cloud SQL instance so the script manually polls for its completion instead.
  • service_account.sh – creates the service account for the Cloud SQL Proxy container and creates the credentials file.
  • cluster.sh – creates the GKE cluster.
  • configs_and_secrets.sh – creates the GKE secrets and a ConfigMap containing the credentials and connection string for the Cloud SQL instance.
  • pgadmin_deployment.sh – creates the pgAdmin4 pod.

Next, let’s use the load balancer to expose the pod in order to connect to the instance, then delete the services when finished to avoid unauthorized access.

  1. Run the following to get the Pod ID:
POD_ID=$(kubectl --namespace default get pods -o name | cut -d '/' -f 2)
  2. Expose the pod via a load balancer:
kubectl expose pod $POD_ID --port=80 --type=LoadBalancer
  3. Get the service IP address:
kubectl get svc

Output:

Note: Keep in mind that waiting for an external IP to be assigned can sometimes take a couple of minutes. Be patient.


Next, we need to open up access to the SQL instance. On the left-hand menu, navigate to SQL. From there, click Connections and then Networking.

With the Public IP box checked, click Add a Network.

Name the network and give it public access:
0.0.0.0/0
(0.0.0.0/0 allows any IPv4 address to reach the instance – fine for this demo, but not something you'd leave enabled in production.)

Click Done, then click Save.

Open a new browser tab using the pgAdmin IP:

http://<SVC_IP>

Sign in to the pgAdmin UI with the following:

  • Email Address: the <PGADMIN_USERNAME> you created (your GCP email)
  • Password: the <PG_ADMIN_CONSOLE_PASSWORD> you defined earlier

Return to the Cloud console, and the SQL page. Click on the Overview tab.

Copy the Public IP address.

In the pgAdmin console, from the left pane click Servers, then click Add New Server.

On the General tab, give your server a name, then click on the Connection tab.

Use the <DATABASE_USER_NAME> (dbadmin) and <USER_PASSWORD> you created earlier. Because the Cloud SQL Proxy sidecar listens inside the same pod, you can connect to 127.0.0.1:5432:

Next, create a direct connection to the database you already spun up:

  • Host name: paste the public IP address you copied
  • Username: <DATABASE_USER_NAME>(dbadmin)
  • Password: <USER_PASSWORD> you created

Click Save.

Congrats! At this point you deployed a GKE cluster with an application that connects to your Cloud SQL instance via a proxy.

After Project Thoughts

This project was surprisingly easy and fairly enjoyable. I learned how to decouple my database so that, instead of managing it myself, I can leverage GCP's hosted service, which will save me a lot of time and energy. It also broadens my imagination when it comes to connecting workloads to cloud-hosted services.

Give it a go!

Hardening Your Kubernetes Cluster with Kube-Bench

In this post, we are going to learn about the compliance tool kube-bench and how to run the CIS Kubernetes Benchmark against a cluster with it.

So I recently finished my CKS and learned a hell of a lot about securing a Kubernetes cluster. While there are plenty of great tools out there, why not start with a compliance framework? What better way to reduce the attack surface of your cluster?

CIS Kubernetes Benchmark!

Linked here is some information directly from the Center for Internet Security.
The CIS Kubernetes Benchmark covers security guidelines and recommendations for:
* Control Plane Components
* Worker Nodes
* Policies: RBAC, service accounts, etc.

Now, What IS Kube Bench?

Kube-bench is a security tool, released under the Apache 2.0 license, that verifies whether a Kubernetes deployment is secure by running the CIS Kubernetes Benchmark checks based on the Center for Internet Security documentation. CIS provides more than one hundred benchmarks across multiple vendor product families. The tool was originally created by Aqua Security as a free tool for Kubernetes users.

I personally like it because the finished report shows me not just which compliance checks I'm passing, but which ones I'm failing AND how to fix them. As an example, see below.

After you run Kube-Bench, you’ll be presented with a report broken down into four sections: Master Node, ETCD Node, Worker Node, and Policies.

There are multiple ways to run the benchmark against your environments, and it comes down to what makes more sense for you. If you're running a self-hosted cluster, then the binary it is! For managed services (EKS, GKE, AKS, etc.), the simpler route is to run kube-bench as a Kubernetes Job. This is also the simplest method when you don't have access to the control plane or root access to the worker nodes.

To review or modify the Job manifest before applying it, run the following command in your terminal window:

curl https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml > job.yaml

Or if you prefer to just apply the file in a “JESUS TAKE THE WHEEL” method:

kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

After you have applied the Job, you will need to read the logs of the pod it creates to get output like what we saw above.
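
Assuming the Job keeps its upstream name of kube-bench, pulling the report out of the logs looks something like this:

# The Job labels its pod with job-name=kube-bench
kubectl get pods -l job-name=kube-bench
kubectl logs -l job-name=kube-bench
# or, equivalently:
kubectl logs job/kube-bench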

So as you can see, there are multiple ways to run kube-bench against your Kubernetes environments. Each has its own pros and cons, so it's up to you to decide what works best for your workloads.

In this article, we went over kube-bench, the CIS Kubernetes Benchmark, why you'd want to run it against your clusters, and the different methods of deployment.

Passing the Certified Kubernetes Security Specialist (CKS) Exam

Hey. Still around? It's been a minute since our last conversation, and I felt like updating you on my life. More specifically, I just passed the hardest exam I've ever taken in my life. Seriously, this one made me question my life's choices. So, what was the exam? What did it cover? How did I prepare? AND, what are my recommendations for you to pass?

A quick note about the exam: you need to have passed the Certified Kubernetes Administrator (CKA) exam first. After you pass the CKS, it extends how long your CKA is valid for. Below is a screenshot of the areas covered in the exam.

Admin details of the exam:
  • Time: 2 hours long
  • Attempts: you get 2 per purchase
  • Remote: yes, you take this from a remote location, and you'll need to be in a room by yourself.
  • Questions: 16 in total
  • Weight of questions: the questions vary from 4% to 6% to 11%, so spend your time wisely.
  • Documentation available: yes, you're able to access only the sources listed below.
  • Results: you will get your results 24 hours after completing the exam.

  1. Kubernetes Documentation
  2. Kubernetes Blog
  3. Trivy
  4. Falco
  5. AppArmor

My Advice:
Ensure that you have a mouse for faster COPY/PASTE actions during the exam. Don't count on keyboard shortcuts, as they're different in the virtual machine given to you.

You will NOT be able to use a browser with pre-organized bookmarks. There will be a virtual desktop that you can access to dive into the Kubernetes documentation pages.

Always, and I mean ALWAYS, when you're working with the kube-apiserver YAML, copy commands and file paths from the instructions into the file. Don't assume that in your tired state you'll remember every key.

Speed is key; there is no time to get lost in a question. Look at each question and flag the ones you think will take longer than two minutes to fix.

When you move to a new question, ensure that you IMMEDIATELY switch contexts. From there, read the entire question, as sometimes they provide a pre-filled template at the very bottom of the page.
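
A quick sketch of what that first step looks like; the context name here is made up, and the real one is given at the top of each question:

kubectl config get-contexts
kubectl config use-context cluster1-admin@cluster1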

When editing the kube-apiserver config, make a copy first. This ensures you're prepared in case the worst happens.
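
On a kubeadm-based cluster, that usually means something like the command below. Note the backup goes outside /etc/kubernetes/manifests so the kubelet doesn't try to run the copy as a second static pod:

sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /root/kube-apiserver.yaml.bak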

Understand that because time is NOT on your side, you will need to cut corners when deploying and editing cluster resources. Because of this, I encourage you to learn more imperative commands, which you can reference in the official Kubernetes guide here.
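
A few examples of the kind of imperative commands I mean (the names and images are just placeholders):

kubectl run nginx --image=nginx
kubectl create deployment web --image=nginx --replicas=2
kubectl expose deployment web --port=80 --type=ClusterIP
kubectl create secret generic db-creds --from-literal=password=changeme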

Study Material
Killercoda
Killer.sh
KodeKloud
My Study Guide

KillerCoda gives you multiple scenarios where you can play around with kubectl in a sandbox environment.
Killer.sh provides a practice test where you can see how you fare with 2 hours to complete the exam.
KodeKloud is what I used for this and my CKA. Their team provides lecture videos on the subjects, broken down into 10-minute clips, followed by labs so you learn with your hands.
My Study Guide is just that. More details about the exam broken down by the subjects I experienced.

Did you end up passing the CKS? What was your experience like?

Passing the Certified Kubernetes Administrator Exam!

It finally happened. Three years of playing around with Kubernetes and I finally decided to take the plunge to earn my CKA. It was NOT easy.

So, if you find yourself on this page wondering what you can do to earn this glorious cert, I'll give you my pointers. I'd like to add that there are TONS of articles and blog posts describing the best ways to earn or master the exam. Hopefully my advice gives you a sense of what to expect and how to prepare yourself.

If you're a person who hates exams, was never good at exams, or is stressed about passing this one: hey, take solace. If there are tons of people saying they passed, you can too.

STARTING POINT

Install Kubernetes on your desktop. Don’t think, just do it. You can’t pass this exam by just studying the information. You’ll need to put that in practice.
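
If you want a concrete starting point, one easy path is Minikube, which is what the next paragraph assumes:

# Assumes minikube and kubectl are already installed (see their install docs)
minikube start
kubectl get nodes    # should show a single node in the Ready state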

Assuming you've installed everything and Minikube is running, now you get to choose what study path works best for you.

I'd suggest spending some extra cash on Certified Kubernetes Administrator (CKA) with Practice Tests on Udemy by Mumshad Mannambeth. His course has lots of videos on the various topics you'll be tested on, but they're short and sweet. Right to the point. After one or two videos, you'll be redirected to KodeKloud, where a virtual environment is created for you to solve problems in. I'll give you a cool hint about the practice exams: they really are what will be on the live exam. I cannot recommend Mumshad's course enough; it may be long, but you will gain a better understanding of Kubernetes architecture.

If you're not a fan of the video series and would rather just read, you can do the CKA course on LinuxFoundation.org. I'll warn you right off the bat: the course is $375, or $575 for the course and exam. And while those prices are steep, the content is well put together. At the end of each section, you have labs. While you won't get an environment like KodeKloud, you can download the lab instructions as PDFs for future use. I still have some content from 2018. What I liked about the Linux Foundation labs' step-by-step guides is that they really get you into the terminal. You absolutely build everything. While KodeKloud has environments pre-set up, the Linux Foundation will have you go through the motions in the terminal.

After you finish the course, I highly suggest going through these practical exercises on GitHub, which will walk you through various tasks. The idea is repetition: the more you do it, the more you'll be in the mindset for the exam and beyond.

Done with Basics?

At this point, you've done the courses and feel confident to take the exam, right? Great, but hold off just yet. Take some time to get familiar with the official Kubernetes documentation. The great thing about this exam is that you're allowed one extra browser tab, and it can only be the Kubernetes documentation. Let's be honest, Kubernetes has too many YAML formats to remember them all. Really think about what your weak points might be on the exam and learn how to search for them.

The exam is over 2 hours, but time WILL fly. So the best thing you can do is learn the shortcuts. Can you do everything with YAML? Absolutely, but should you? Not all the time. This page actually helped me learn how to run/deploy anything I needed by running commands instead of copy-pasting YAML. If you can run these commands instead of researching a YAML format, you will save time. And, again, time is not your friend in this exam.
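
And when a question really does need YAML, you can still lean on imperative commands to generate a skeleton instead of writing it from scratch (the resource names here are placeholders):

kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml
kubectl run tmp --image=busybox --dry-run=client -o yaml > pod.yaml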

CONCLUSION

The CKA is a hard exam, but you will pass it. I believe you can do it. Invest the time to get this knocked out.

The Art of Defense – Basic Nutanix Survival in Today’s Threat Landscape

Want to learn how to harden your defenses against external threats but not quite sure where to begin?

In this article, we will go over the following to help you stay proactive with securing your Nutanix environment.

Hardening Your CVM

  • Change Default Password
  • Learn about STIG, SCMA, and how to configure them
  • Advanced Intrusion Detection Environment (AIDE)
  • Cluster Lockdown

Change Default Password
The first thing you should do when you get your system is change the default password! Many systems get compromised due to a weak password or because the default was never changed. Assuming you're not enabling Cluster Lockdown, perform the following steps:
1. SSH into CVM
2. Change the Nutanix user account password
nutanix@cvm$ passwd
3. Change the Nutanix root account password
nutanix@cvm$ sudo passwd root

STIG, SCMA, and how to configure them
If you’re scratching your head about these terms, don’t worry. I didn’t know about them until I joined Nutanix. Let’s start with the Security Technical Implementation Guide (STIG).

Now, what is a STIG?
STIGs, as Nutanix describes them: "Powerful automation and self-healing security models help maintain continuous security in enterprise cloud environments with efficiency and ease. Nutanix has created custom STIGs that are based on the guidelines outlined by the Defense Information Systems Agency (DISA) to keep the enterprise cloud platform within compliance and reduce attack surfaces." In the case of Linux, the STIGs are checks that find and fix known configuration vulnerabilities.

Every Nutanix system ships with a series of STIGs already installed and kept up to date, based on guidelines from the National Institute of Standards and Technology (NIST). Check out the list of official and customized Nutanix STIGs.

So now that we know what STIGs are, what is SCMA?
Security Configuration Management Automation (SCMA) checks over 800 security entities in the Nutanix STIGs, which cover both storage and built-in virtualization. Nutanix leverages SaltStack and SCMA to self-heal any deviation from the security baseline configuration of the operating system and hypervisor and remain in compliance. If any component is found to be non-compliant, it is set back to the supported security settings without any intervention.

  • SCMA monitors the deployment periodically for any unknown or unauthorized changes to configurations, and can self-heal from any deviation to remain in compliance.
  • For example, log file permissions are just one of several items Nutanix checks and automatically protects.

SaltStack Enterprise, built on the open source Salt platform, provides system management software for the software-defined data center, delivering event-driven automation for natively integrated configuration management, infrastructure security and compliance, and any cloud or container control.

Hardening your CVMs
SSH into a CVM, and run the following:
nutanix@cvm$ ncli cluster get-hypervisor-security-config
….
Enable Aide : true
Enable Core : false
Enable High Strength P… : true
Enable Banner : false
Schedule : HOURLY

You can customize these categories to your preference, but I want to focus on three: AIDE, high-strength passwords, and the schedule.

AIDE stands for Advanced Intrusion Detection Environment and is one of the most popular tools for monitoring changes to Linux-based operating systems. It's used to protect your system against malware and viruses and to detect unauthorized activity. It works by creating a database of the file system and checking that database against the system to ensure file integrity and detect intrusions. AIDE helps you shorten investigation time during incident response by focusing on the files that have changed. Basic info about AIDE can be found here.
To enable AIDE, run this command in the CVM terminal.
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-aide=true

To enable the high-strength password policies (minlen=15, difok=8, remember=24) for your CVM:
nutanix@cvm$ ncli cluster edit-hypervisor-security-params enable-high-strength-password=true

To change the default schedule for running SCMA (it can be hourly, daily, weekly, or monthly):
nutanix@cvm$ ncli cluster edit-hypervisor-security-params schedule=hourly


Cluster Lockdown
Lastly, we can enable Cluster Lockdown on your Nutanix system. This disables password-based SSH access to the CVMs and ensures the system denies anyone trying to log in that way.

  • Nutanix recommends that direct access to the CVMs and hypervisors, including SSH, be restricted to as few entities as possible.
  • In high-security settings, Cluster Lockdown can be very appropriate and should be implemented.
  • Cluster Lockdown does not affect any cluster communication between its components. The cluster will function as normal.

You can enable this feature via Prism Settings, but follow this guide to generate the keys needed for this operation.
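
As a rough sketch, generating the key pair locally looks like this; the file name is arbitrary, and the public key is what you add in Prism when enabling lockdown:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/nutanix_lockdown -C "cluster-lockdown"
cat ~/.ssh/nutanix_lockdown.pub    # paste this public key into Prism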

If you've made it this far, then congrats! You have learned about some advanced features you can enable or customize to further strengthen your defensive stance against external threats.

Customizing Your Linux/Mac OSX Terminal (Sorry Windows)

In this post, we’re going to learn how to customize your terminal. If you’re getting into any software and/or system development, you need to get comfortable with the command line. Because of that, why not explore how to make this tool more personalized?

For this reason, I want to explore customization of your terminal with ZSH.

So what is ZSH? Zsh is an alternative shell to Bash, and Oh My Zsh is the framework most people use to customize it. Taken from the Oh My Zsh website,

Oh My Zsh is an open source, community-driven framework for managing your Zsh configuration. Sounds boring. Let’s try again. Oh My Zsh will not make you a 10x developer…but you may feel like one!

Open up the terminal in any Linux distro or on macOS and you'll get the plain look of Bash:


It's exactly as you suspect…boring. With ZSH, you're able to load the theme of your choice in the ~/.zshrc file located in your home directory. Here are a few examples of what your terminal COULD look like.

In the first example, note the fully colored-in bar annotating the directory you're currently in and the Git branch (master) of the GitHub repo.
The second example has the same content but, of course, a different flavor.

I already have this installed on my machine, so for this example, I spun up a Vagrant Ubuntu server to go through the process. I'll show the Debian way of installation, but you can follow this link to install on CentOS, macOS, and of course……Windows.
Debian-based Linux Systems

sudo apt update
sudo apt upgrade
sudo apt install zsh

When that is finished installing, follow up with this bad boy:

sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

The output from the above shell command should give you this.

Make sure to say yes to changing your default shell to ZSH. Sometimes, and I mean sometimes, the terminal will default back to Bash. If that happens, or you just don't want to run ZSH as your default, you can easily type:

exec zsh 
or
exec bash

These commands will switch the shell you're working in.

When ZSH installs, it will default to the theme "robbyrussell". I like it personally, but who wants to stick with the default? There are waaaayyyyyy too many themes for you to choose from. Just check out this link to view the different terminal themes.

Simply run the command below to open the config file, then set the theme you'd like from the above link inside the quotation marks (see the example after the note below).

vim ~/.zshrc 
Note: For fun, you should put random in the quotation marks and see what happens. You’ll need to reload zsh for it to take effect.
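
For reference, the relevant line in ~/.zshrc looks like this ("agnoster" is just one example from that list):

ZSH_THEME="agnoster"
# ZSH_THEME="random"   # surprise theme every time you open a new shell

# Reload zsh so the change takes effect:
exec zsh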

And, that’s it! This post is mainly to show that you can customize a lot on your machines to make your developing experience more fun.

Passed my second AWS Cert: SysOps Administrator

My first cert from AWS is the Certified Solutions Architect Associate (CSAA), which I passed in May of 2020. A Cloud Guru had recommended that if you pass the CSAA, you should immediately try to pass the SysOps Administrator cert due to the overlapping knowledge. So…why not? While the CSAA is all about how to build secure, resilient, and cost-effective environments, SysOps is all about maintaining and monitoring those environments.

The Exam

The exam itself consists of 65 questions, and you need a score of 720 out of 1000 to pass. I read that 5 of the questions are unscored and used for statistical purposes. The exam is 130 minutes long, and depending on where you take it (remotely or at a testing center), you won't get a break. So think twice before chugging coffee right before the exam.

Study Material

A Cloud Guru / Linux Academy / Udemy
You have three good options to establish a foundation of knowledge: A Cloud Guru (ACG), Udemy, and Linux Academy. Many have a preference for Udemy, but I stuck with ACG as they were who I used to study for my CSAA. The content is collectively about 24 hours. What I did was go through the content at normal speed, making sure to do the labs and read any white papers recommended along the way. The quizzes were a bit tricky, but not impossible. After I finished the first time, I went through the videos a SECOND time, but at 1.2x speed.

White Papers
These are the bread and butter of AWS for architectures, data resiliency, best practices, security, etc. Some are ungodly long, but they are absolutely worth it. Many of the white papers have scenarios for data resiliency with overlapping resources and explain why you should use them. While that is repetitive, it's absolutely an amazing resource to read so the concepts and resources get beaten into your head.

FAQ
I really shouldn't have to say it, but FAQs go hand in hand with the white papers. There is so much information in the FAQ section of AWS. For this exam, I focused on:
Config
Trusted Advisor
Inspector
Cloudformation
Cloudwatch
Cloudtrail
KMS
Service Catalog

Practice Tests
You NEED to purchase a practice test. I recommend this from Udemy written by Jon Bonso.
Let me explain why. These practice tests are designed to get you into the mindset of how Amazon writes their exams. They are timed and stick to 65 questions, like what you'd expect from a normal AWS exam. Jon writes his practice exams to show you not just why an answer is right, but why the other answers are wrong. I wasted money on one exam (I won't point fingers) where all the writer did was explain why an answer was right. When you learn why answers are wrong, the design concepts get beaten into your head over and over again.

Cheat Sheets
In comes Tutorials Dojo for the win! If you're not a fan of re-reading those massive FAQs (looking at you, EC2), then look no further! I loved going here when I had a few minutes between customer calls to review important info. The content is structured as simply as possible, no fluff. Just bullet points on the different resources.

My Thoughts
I passed my CSAA on the first go, no issue! So having to take this one a couple of times was a bit of a gut punch. Between the two exams, this one is the harder by far. A Cloud Guru was a bit disappointing because I felt there was a lot of content not covered in detail in their SysOps course. For the most part, their content was geared more toward a high-level overview. In prepping for this post, I've read a lot of negative feedback from the community on ACG content recently. You can still use their platform to learn, I'm not saying ditch it; however, you will need to do more studying on the side from different sources. The exam asked A LOT about monitoring instances, databases, networking (especially endpoints), and cost-effective options.

Good luck!

Want to Add Automation to Terraform in AWS?

I'm studying for the Certified Kubernetes Administrator exam, which requires me to set up multiple VMs with specific software and configurations.

Going through this process by hand is time consuming and definitely error prone, with multiple pieces to configure. So, I figured, why not automate it with a Terraform file?

To make my life easier, I went ahead and created a bash script to automate the install process of Kubernetes and the other software components that were required. But here is the dilemma I ran into: how do I copy this file from my workstation to the EC2 instances that I provision? (Yes, I could scp it, but I'm lazy and don't want to do that.)

I'll save you the time: you'll need to add a provisioner block to the resource you're defining. Happily, in my rabbit-hole findings, I found you're able to use this with other providers, not just AWS. But in this case, we're going to stick with AWS.

Here is the full resource defined for context.

resource "aws_instance" "prod_web" {
  count = 2

  ami           = "ami-12345"
  instance_type = "t2.medium"
  key_name      = "secretkey"

  vpc_security_group_ids = [
    aws_security_group.prod_web.id
  ]
  
  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"

    connection {
      user      = "ubuntu"
      type      = "ssh"
      private_key  = "${file("secretkey.pem")}"
      host      = "${self.public_ip}"
    }
  }

  tags = {
    "linuxfoundation" : "traffic"
  }
}

Ok, so let's break this down for the main problem. To copy the file to a specific directory, you'll need to add this inside your resource block.

  provisioner "file" {
    source      = "kubernetes-fresh-install-Ubuntu.sh"
    destination = "/home/ubuntu/kubernetes-fresh-install-Ubuntu.sh"
 

Pretty straightforward. Use provisioner with "file" because we're copying a file. Specify the file you'd like to copy with source, then the destination on the newly created EC2 instance(s).

Next, inside the provisioner block, you'll need to create a connection. This is because Terraform needs to SSH into the new instances in order to copy the files.

    connection {
      user      = "ubuntu"
      type      = "ssh"
      private_key  = "${file("secretkey.pem")}"
      host      = "${self.public_ip}"
    }

Again, pretty straightforward. Set user to the username needed to SSH into the instances, and type to ssh. Here is where I got hung up: I had a key generated by AWS that I wanted to use, but had to dig deeper to find that you absolutely need to use "${file("path/of/file.pem")}" in order to SSH in securely. The secret key can sit in your executing directory or in the ~/.ssh/ folder. After that, you have to specify the host to connect to. Because I'm creating more than one EC2 instance and don't know the public IP addresses ahead of time, I use "${self.public_ip}" so each instance's assigned IP gets used.

That's it! By using provisioner and connection, you're able to copy a file from your workstation onto the newly created EC2 instances in AWS.
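
From there, the usual Terraform workflow kicks everything off (this assumes the script and secretkey.pem sit next to your .tf files):

terraform init     # download the AWS provider
terraform plan     # preview the two instances and the file provisioner
terraform apply    # create the instances; the file is copied once SSH is reachable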

Check out this full example from a different user on GitHub.

Building a Lambda Python Package with Docker

Not too long ago I was working on a Lambda function that added watermarks to images uploaded to S3. For the life of me, I couldn't get the function to work. Time and time again, while troubleshooting my Python script, I would run into the same error: Lambda wouldn't recognize the installed Pillow package. I couldn't figure out why. Eventually, I found out the reason…

ZIP FILES NEED TO BE ZIPPED IN A LINUX ENVIRONMENT

Note: If you’re unfamiliar with using Lambda or how to import your python script with dependencies, check this out for a refresher.

I'm working from a macOS machine, so I could create the zip using a number of different options:
1. Vagrant
2. Linux VM in VirtualBox
3. Docker
I'd rather stick with Docker in this case since it has a lighter footprint on my machine and I can spin it up whenever I want.

Before we begin, we need to know which versions and runtimes of Python Lambda will support. Click on the image for the source link.

From the image above, we now know that the Python runtimes use Amazon Linux and Amazon Linux 2, which are derived from CentOS (Red Hat/RHEL).

This matters because our Lambda function might use an external package in our Python application, and if it does, that package HAS to be compatible with the Amazon Linux distro. Take another look at the official container image with Docker.

Creating Your Docker Container

We now need to set up our Python 3.7 environment using the AWS Docker image from above. Start by creating a Dockerfile in your terminal or editor of preference.

FROM amazonlinux:2018.03
RUN yum update -y
RUN yum install -y \
    gcc \
    openssl-devel \
    zlib-devel \
    libffi-devel \
    wget && \
    yum -y clean all

WORKDIR /usr/src
# Install Python 3.7
RUN wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz
RUN tar xzf Python-3.7.4.tgz
RUN cd Python-3.7.4 ; ./configure --enable-optimizations; make altinstall
RUN rm Python-3.7.4.tgz
RUN rm -rf Python-3.7.4
RUN python3.7 -V
# Install pip
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.7 get-pip.py
RUN rm get-pip.py
RUN pip -V

Now we can build our container image from this Dockerfile. You can give it any name, but I’m gonna keep it simple. Run the command below in Terminal.
Note: It’s going to take a minute to finish

$ docker build -t lambda-linux-3.7 .

For more ease of use and organization, I’d recommend creating a new directory to house your packages and script.

$ mkdir my_lambda
$ cd my_lambda

From here, add the desired Python script into your directory, then run the Docker image. The way AWS Lambda works, you'll need the packages associated with the script zipped up together with it. To get everything in one place, let's run the image and drop into a bash shell inside the container.

docker run -v $(pwd):/my_project -ti lambda-linux-3.7

The above command mounts your current directory into the container at /my_project and drops you into an interactive shell (it's not really SSH, just an attached terminal). From inside the container, install your packages into the mounted directory so they land in my_lambda on your machine:

bash-4.2# pip install -t /my_project paramiko

At this point I'm just installing an SSH tool for Python called Paramiko. It's a great tool for automation, but that's for a different post.

When you're done installing your desired packages, exit the container and you'll find all of those packages sitting in your my_lambda directory on the host.

Once you’re done, ZIP it.

zip -r my_lambda.zip *

From here, log into your AWS console and navigate to Lambda. Upload your new zip file and enjoy your serverless application.
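
If you'd rather skip the console, the AWS CLI can push the zip for you; the function name below is just a placeholder for whatever you called yours:

aws lambda update-function-code \
  --function-name my-watermark-function \
  --zip-file fileb://my_lambda.zip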

Automating CLI Command Execution with Paramiko

I spend my time always playing around with environments. Sometimes it's simply spinning up multiple VMs with Terraform, configuring them with Ansible, or just running simple stress tests. In this particular case, I had spun up four CentOS VMs, all running the same application: Folding@home. In case you don't know what Folding@home is:

“Folding@home is a distributed computing project which studies protein folding, misfolding, aggregation, and related diseases. We use novel computational methods and large scale distributed computing to simulate timescales thousands to millions of times longer than previously achieved.”

I figured running this application in my Nutanix environment would be a fun project. Currently, I'm running playbooks from my Prism Central instance to automate powering these VMs on and off based on pre-defined hours of the day. But what if I want to spin them up without logging into Prism Element? While Terraform can manage the power state of VMs with some providers, sadly, you're not able to do that with Nutanix yet. So I decided to use an old friend: Python with Paramiko.

Note: Ok, I lied. It’s a combination of Python, Paramiko, and Nutanix ACLI.

What you’ll need:
1. CVM IP Address
2. Login credentials for said CVM
3. Python 3 installed
4. The UUIDs of the VMs you want to power on

Alrighty, SSH into any of your CVMs with the appropriate user credentials. From here, you'll need to gather the UUIDs of the VMs you'd like to power on. Use the command: acli vm.list

Copy the VM UUIDs to your clipboard.

Go to your editor of choice and use my already created script as a template.

I use Microsoft Visual Studio Code for my editing, but anything really works.

import paramiko
import sys
# import config  # optional: keep credentials in a separate config file instead of passing them as arguments

username = sys.argv[1] # First command after your script
password = sys.argv[2] # Second command after your script

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('10.48.2.15', port=22, username=username, password=password)
stdin, stdout, stderr = client.exec_command("acli vm.on cbd69a63-c0a1-404c-969a-9816e085372f && acli vm.on b637a621-6cf2-442c-a513-5caeb108e96f && acli vm.on 28d4c186-c05c-4709-814f-eb635b4f269d")
lines = stdout.read()

print(lines)
client.close()

The script is pretty standard compared to what you'll find in Paramiko's documentation. There are two points I want to highlight for your future use:

1. In the documentation, you'd store your username and password in plain text in the same script. This works, but it is a HUUUGE no-no as it's a security issue. You could store your passwords in a separate file, but I decided to import sys so I can use the sys.argv[] feature to pass the username and password on the command line instead.

2. client.exec_command("") is where you'll enter your Nutanix acli commands. Since I want to power on three VMs, I added "&&" between each command so they run back to back in a single SSH session. The command being acli vm.on <VM_UUID>.

When you’re done making your edits, go ahead and run the script from your command line.

python PowerOnVM.py admin password

After the Python script name come the username (admin) and password (password) you need to log into the CVM and execute the acli commands.

And that’s it! Simple way to execute terminal commands on a remote host.