Connecting your Kubernetes Application to Cloud SQL

If you couldn’t tell, I’m a bit of a Kubernetes nerd. I started learning it in 2018 and have always had a passion for tinkering with the tool. But one thing I’ve always shied away from is databases.

So, after some thought, I realized that I need to know databases like the back of my hand. In my next post, we’ll talk about databases and the use cases behind them, but for now, let’s get your hands dirty. In this post, we’re going to deploy a simple Kubernetes application, then connect it to a PostgreSQL instance in GCP’s Cloud SQL. Along the way, we’ll cover the “why” of this architecture:

  • Protect your database from unauthorized access by using an unprivileged service account on your GKE nodes.
  • Put privileged service account credentials into a container running on GKE.
  • Use the Cloud SQL Proxy to offload the work of connecting to your Cloud SQL instance and reduce your application’s knowledge of your infrastructure.

What you’ll need for this:
1. GCP account with ability to create Google Kubernetes Engine (GKE) clusters.
2. Ability to clone from GitHub or copy files with gsutil (Cloud Shell covers both).

Cloud SQL Proxy
By using the Cloud SQL Proxy, you delegate connection management to Google. This frees your application from knowing connection details and streamlines secret handling. The Cloud SQL Proxy, conveniently provided as a Docker container by Google, can run alongside your application within the same GKE pod for a seamless setup.
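
From the application’s point of view, the database might as well be running on the same machine. Here’s a minimal sketch of what that looks like (the user and database names are placeholders, not values from the demo):

# The app connects to localhost; the sidecar proxy handles authentication,
# encryption, and routing to the actual Cloud SQL instance.
psql "host=127.0.0.1 port=5432 user=<DB_USER> dbname=postgres"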

Architecture

The application and its sidecar container are deployed in a single Kubernetes (k8s) pod running on the only node in the GKE cluster. The application communicates with the Cloud SQL instance via the Cloud SQL Proxy process listening on localhost.

The k8s manifest builds a single-replica Deployment object with two containers: pgAdmin and the Cloud SQL Proxy. Two secrets are installed into the GKE cluster: the Cloud SQL instance connection information and a service account key credentials file, both used by the Cloud SQL Proxy container’s Cloud SQL API calls.

The application doesn’t have to know anything about how to connect to Cloud SQL, nor does it need any exposure to its API. The Cloud SQL Proxy process takes care of that for the application. It’s important to note that the Cloud SQL Proxy container runs as a ‘sidecar’ container in the pod.
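
If you’re curious what that sidecar pattern looks like on paper, here’s a trimmed-down sketch of such a Deployment (the image tags, secret name, and instance connection string are illustrative; the demo’s actual manifest, which the scripts apply for you, differs in its details):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels: {app: pgadmin}
  template:
    metadata:
      labels: {app: pgadmin}
    spec:
      containers:
      - name: pgadmin                  # the application container
        image: dpage/pgadmin4
        env:                           # pgAdmin needs these to start
        - name: PGADMIN_DEFAULT_EMAIL
          value: <PGADMIN_USERNAME>
        - name: PGADMIN_DEFAULT_PASSWORD
          value: <PG_ADMIN_CONSOLE_PASSWORD>
        ports:
        - containerPort: 80
      - name: cloudsql-proxy           # the sidecar; the app reaches it on localhost
        image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
        command: ["/cloud_sql_proxy",
                  "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-creds
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-creds
        secret:
          secretName: cloudsql-instance-credentials
EOF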

Getting Started
Log in to your GCP console and select your project. From there, you’ll need to activate Cloud Shell, which you can find near the top right of the console.

Once it’s activated, a terminal pane will open at the bottom of the console.

Keep in mind that this can all be done from the terminal on your work or personal computer as well, but to keep things simple, we’ll go with the provided Cloud Shell.

When you open Cloud Shell, your credentials and PROJECT_ID are loaded automatically, so you won’t need to do anything additional.
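
You can double-check what Cloud Shell picked up for you:

# Confirm the active account and project
gcloud config get-value account
gcloud config get-value project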

Next, you’ll need to download the demo resources. Lucky for us, engineers at GCP went ahead and created some code for us to deploy and see how this works in the cloud. Run the following commands to download and unpack the project:

gsutil cp gs://spls/gsp449/gke-cloud-sql-postgres-demo.tar.gz .
tar -xzvf gke-cloud-sql-postgres-demo.tar.gz

From here, you’ll run everything from the project directory:

cd gke-cloud-sql-postgres-demo

Now, the fun begins.

Deployment

This particular deployment is automated, but you’ll need to define a few parameters, in order:

  • <DATABASE_USER_NAME> – a username for your Cloud SQL instance (you create this; any name works)
  • <PGADMIN_USERNAME> – a username for the pgAdmin console (also created by you; we’ll use your GCP email)
  • <USER_PASSWORD> – the password to log in to the Postgres instance
  • <PG_ADMIN_CONSOLE_PASSWORD> – the password to log in to the pgAdmin UI

Let’s start by saving your account into a variable that we’ll need later:

PG_EMAIL=$(gcloud config get-value account)
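
A quick sanity check; this should print your GCP email:

echo $PG_EMAIL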

Run the command below to kick off the deployment with the two usernames. Keep in mind that you’ll be prompted to create a password for each.
./create.sh dbadmin $PG_EMAIL

While this is deploying, it’s worth understanding the scripts being run. Note: this may take up to 10 minutes.

  • enable_apis.sh – enables the GKE API and Cloud SQL Admin API.
  • postgres_instance.sh – creates the Cloud SQL instance and an additional Postgres user. Note that gcloud times out when waiting for the creation of a Cloud SQL instance, so the script manually polls for its completion instead.
  • service_account.sh – creates the service account for the Cloud SQL Proxy container and creates the credentials file (see the sketch after this list).
  • cluster.sh – creates the GKE cluster.
  • configs_and_secrets.sh – creates the GKE secrets and configMap containing credentials and the connection string for the Cloud SQL instance.
  • pgadmin_deployment.sh – creates the pgAdmin4 pod.
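
To make the service account step a bit more concrete, here’s roughly the shape of what service_account.sh does (a sketch with illustrative names; check the script itself for the exact commands and flags):

# Create an unprivileged service account for the proxy...
gcloud iam service-accounts create sql-proxy-sa --display-name "Cloud SQL Proxy"
# ...grant it only the Cloud SQL Client role...
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member "serviceAccount:sql-proxy-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/cloudsql.client
# ...and mint the key file that ends up mounted into the sidecar as a secret.
gcloud iam service-accounts keys create credentials.json \
  --iam-account sql-proxy-sa@${PROJECT_ID}.iam.gserviceaccount.com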

Next, let’s expose the pod via a load balancer so we can connect to the instance, then delete the service when finished to avoid unauthorized access.

  1. Run the following to get the Pod ID:
POD_ID=$(kubectl --namespace default get pods -o name | cut -d '/' -f 2)
  2. Expose the pod via load balancer:
kubectl expose pod $POD_ID --port=80 --type=LoadBalancer
  3. Get the service IP address:
kubectl get svc

The output will list your service along with its external IP address. Note: sometimes waiting for an external IP to be assigned takes a couple of minutes (the EXTERNAL-IP column shows <pending> until then), so be patient.
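
If you’d rather not re-run that by hand, you can watch the service instead (kubectl expose names the service after the pod, so $POD_ID works here):

# Watch until EXTERNAL-IP flips from <pending> to a real address (Ctrl+C to stop)
kubectl get svc $POD_ID -w
# Or pull just the IP once it's assigned
kubectl get svc $POD_ID -o jsonpath='{.status.loadBalancer.ingress[0].ip}'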


Next, we need to access the SQL instance. On the lefthand menu, navigate to SQL. From there, click Connections and then Networking.

With the Public IP box checked, click Add a Network.

Name the network and give it public access:

0.0.0.0/0

(Keep in mind that 0.0.0.0/0 allows connections from any IP address; that’s fine for a short-lived demo, but never do this in production.)

Click Done, then click Save.
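
If you prefer the CLI, the rough equivalent from Cloud Shell is (the instance name is a placeholder; note that --authorized-networks replaces the instance’s existing list):

gcloud sql instances patch <INSTANCE_NAME> --authorized-networks=0.0.0.0/0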

Open a new browser tab using the pgAdmin IP:

http://<SVC_IP>

Sign in to the pgAdmin UI with the following:

  • <PGADMIN_USERNAME> – your GCP email in the “Email Address” field
  • <PG_ADMIN_CONSOLE_PASSWORD> – the password you defined earlier

Return to the SQL page in the Cloud console and click the Overview tab.

Copy the Public IP address.
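
You can also grab that address from Cloud Shell (instance name is a placeholder again):

gcloud sql instances describe <INSTANCE_NAME> \
  --format='value(ipAddresses[0].ipAddress)'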

In the pgAdmin console, from the left pane click Servers, then click Add New Server.

On the General tab, give your server a name, then click on the Connection tab.

Use the <DATABASE_USER_NAME> (dbadmin) and <USER_PASSWORD> you created earlier to connect to 127.0.0.1:5432; that’s the address the Cloud SQL Proxy sidecar listens on inside the pod:

Alternatively, you can create a connection straight to the database over its public IP:

  • Host name: paste the public IP address you copied
  • Username: <DATABASE_USER_NAME> (dbadmin)
  • Password: <USER_PASSWORD> you created

Click Save.
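
If you want a sanity check outside pgAdmin, and psql is available in your shell, you can hit the public IP directly (this only works because of the authorized-network rule we added above):

psql "host=<PUBLIC_IP> port=5432 user=dbadmin dbname=postgres"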

Congrats! At this point, you’ve deployed a GKE cluster with an application that connects to your Cloud SQL instance via a proxy.
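
And since we promised to clean up the public entry points when finished:

# Remove the load balancer service exposing pgAdmin
kubectl delete svc $POD_ID

You’ll also want to remove the 0.0.0.0/0 entry from the instance’s authorized networks.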

After Project Thoughts

This project was surprisingly easy and fairly enjoyable to work through. I was able to decouple my database from my application, and instead of managing it myself, I can now leverage GCP’s hosted service, which will save me a lot of time and energy. It also broadens my perspective on how workloads can connect to cloud-hosted services.

Give it a go!