Google GKE and SQL With Terraform
By the end of this tutorial, you should have a functional version of Kubernetes running on GKE and PostgreSQL using Google Cloud SQL offering.
A few weeks back, I started testing Kubernetes offerings from a few cloud providers: Google GKE, Amazon AWS EKS, and Microsoft Azure AKS. In this first article, we will discuss how to set up Google Cloud's Kubernetes offering and the Cloud SQL PostgreSQL offering with Terraform, using a dedicated project and a Terraform service account for automated deployment.
NOTE: This setup is not secured and is not production-ready.
This tutorial is structured into 5 parts:
- Initial tooling setup of gcloud, kubectl, and Terraform
- Creating a Google Cloud project and service account for Terraform
- Creating backend storage to tfstate file in Cloud Storage
- Setting up separate projects for development and production environments
- Creating a Kubernetes cluster on GKE and PostgreSQL on Cloud SQL
NOTE: This tutorial will not describe the deployment tools used in detail.
1. Initial Tooling Setup: gcloud, kubectl, and Terraform
Assuming you already have a Google Cloud account, we will need additional binaries for gcloud CLI, Terraform, and kubectl.
The gcloud installation differs by operating system and Linux distribution; follow the official installation guide to deploy it on OS X or your particular Linux distribution.
Deploying Terraform
OS X
curl -o terraform_0.11.7_darwin_amd64.zip https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_darwin_amd64.zip
unzip terraform_0.11.7_darwin_amd64.zip -d /usr/local/bin/
Linux
curl https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip > terraform_0.11.7_linux_amd64.zip
unzip terraform_0.11.7_linux_amd64.zip -d /usr/local/bin/
Verification
Verify that Terraform version 0.11.7 or higher is installed:
terraform version
Deploying kubectl
OS X
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Linux
wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Verification
Verify that kubectl version 1.11.0 or higher is installed:
kubectl version --client
Authenticate to gcloud
Before configuring gcloud CLI, you can check the available zones and regions nearest to your location:
gcloud compute regions list
gcloud compute zones list
Follow gcloud init and select a default zone, e.g. asia-south1:
gcloud init
2. Creating Google Cloud Project and Service Account for Terraform
The best practice is to use a separate "technical" account to manage infrastructure. This account can be used in automated code deployment from Jenkins, CircleCI, or any other tool you may choose.
Set Up Environment
export TF_VAR_org_id=YOUR_ORG_ID
export TF_VAR_billing_account=YOUR_BILLING_ACCOUNT_ID
export TF_ADMIN=terraform-admin-example
export TF_CREDS=~/.config/gcloud/terraform-admin.json
Find the values for YOUR_ORG_ID and YOUR_BILLING_ACCOUNT_ID:
gcloud organizations list
gcloud beta billing accounts list
Create the Terraform Admin Project
Create a new project and link it to your billing account:
gcloud projects create ${TF_ADMIN} \
--organization ${TF_VAR_org_id} \
--set-as-default
gcloud beta billing projects link ${TF_ADMIN} \
--billing-account ${TF_VAR_billing_account}
Create the Terraform Service Account
Create the service account in the Terraform admin project and download the JSON credentials:
gcloud iam service-accounts create terraform \
--display-name "Terraform admin account"
gcloud iam service-accounts keys create ${TF_CREDS} \
--iam-account terraform@${TF_ADMIN}.iam.gserviceaccount.com
Grant the service account permission to view the Admin Project and manage Cloud Storage:
gcloud projects add-iam-policy-binding ${TF_ADMIN} \
--member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
--role roles/viewer
gcloud projects add-iam-policy-binding ${TF_ADMIN} \
--member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
--role roles/storage.admin
Enable APIs on the Terraform Admin Project
gcloud services enable cloudresourcemanager.googleapis.com && \
gcloud services enable cloudbilling.googleapis.com && \
gcloud services enable iam.googleapis.com && \
gcloud services enable compute.googleapis.com && \
gcloud services enable sqladmin.googleapis.com && \
gcloud services enable container.googleapis.com
Add Organization/Folder-Level Permissions
Grant the service account permission to create projects and assign billing accounts:
gcloud organizations add-iam-policy-binding ${TF_VAR_org_id} \
--member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
--role roles/resourcemanager.projectCreator
gcloud organizations add-iam-policy-binding ${TF_VAR_org_id} \
--member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \
--role roles/billing.user
3. Creating Backend Storage to tfstate File in Cloud Storage
By default, Terraform stores state about your infrastructure and configuration in a local file named terraform.tfstate. The state is used by Terraform to map resources to the configuration and to track metadata.
Terraform also allows the state file to be stored remotely, which works better in a team environment or for automated deployments.
We will use Google Storage and create a new bucket where we can store state files.
Create the remote backend bucket in Cloud Storage for storage of the terraform.tfstate file:
gsutil mb -p ${TF_ADMIN} -l asia-southeast1 gs://${TF_ADMIN}
Enable versioning for the remote bucket:
gsutil versioning set on gs://${TF_ADMIN}
Configure your environment for the Google Cloud Terraform provider:
export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS}
4. Setting Up Separate Projects for Development and Production Environments
In order to segregate the development and production environments, we will use separate Google Cloud projects, which allow us to segregate infrastructure while at the same time maintaining the same Terraform code base.
Terraform allows us to use a separate tfstate file for each environment through its workspaces feature.
Let's see the current file structure:
.
├── backend.tf
├── main.tf
├── outputs.tf
├── terraform.tfvars
└── variables.tf
The first step is to keep sensitive information outside of the external git repository. The best practice is to create terraform.tfvars for the sensitive information and add *.tfvars to .gitignore.
.gitignore
*.tfstate
*.tfstate.backup
*.tfvars
.terraform
tfplan
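To double-check that git really ignores these files, you can use git check-ignore. A minimal sketch, using a throwaway repository and throwaway file names:

```shell
# Create a scratch repo with the .gitignore above and confirm the
# sensitive Terraform files are hidden from git (paths are throwaway)
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '%s\n' '*.tfstate' '*.tfstate.backup' '*.tfvars' '.terraform' 'tfplan' > .gitignore
touch terraform.tfvars terraform.tfstate
git check-ignore terraform.tfvars terraform.tfstate
```

If both file names are printed, git will never stage your secrets or state.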
Create the terraform.tfvars file in the project folder and replace "XXXXXX" with the proper data. In our case, the tfvars data is referenced in variables.tf, where we keep variables for main.tf.
billing_account = "XXXXXX-XXXXXX-XXXXXX"
org_id = "XXXXXXXXXXX"
backend.tf allows us to use a newly created Google storage bucket to keep our tfstate files.
terraform {
backend "gcs" {
bucket = "terraform-admin-mmm"
prefix = "terraform-project"
}
}
Variables used in the Terraform main.tf file:
# GCP variables
variable "region" {
default = "asia-southeast1"
description = "Region of resources"
}
variable "project_name" {
# default = "test"
default = {
prod = "prod"
dev = "dev"
}
description = "The NAME of the Google Cloud project"
}
variable "billing_account" {
description = "Billing account STRING."
}
variable "org_id" {
description = "Organisation account NR."
}
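The project_name variable is a map keyed by workspace name, so the expression var.project_name[terraform.workspace] resolves to "dev" in the dev workspace and "prod" in prod. A shell sketch of that lookup, purely for illustration (assumes bash 4+ for associative arrays):

```shell
# Simulate var.project_name[terraform.workspace]: a map keyed by
# workspace name picks the right suffix per environment
declare -A project_name=( [prod]="prod" [dev]="dev" )
workspace="dev"   # in Terraform this comes from terraform.workspace
name="terraform-${project_name[$workspace]}"
echo "$name"
```

Switching the workspace with terraform workspace select prod flips every such lookup to the prod values without touching the code.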
Outputs: Once Terraform deploys the new infrastructure, we will need some outputs that we can reuse for GKE and SQL setup.
# project creation output
output "project_id" {
value = "${google_project.project.project_id}"
}
Finally, the main source of the gcloud project creation:
provider "google" {
version = "~> 1.16"
region = "${var.region}"
}
provider "random" {}
resource "random_id" "id" {
byte_length = 4
prefix = "terraform-${var.project_name[terraform.workspace]}-"
}
resource "google_project" "project" {
name = "terraform-${var.project_name[terraform.workspace]}"
project_id = "${random_id.id.hex}"
billing_account = "${var.billing_account}"
org_id = "${var.org_id}"
}
resource "google_project_services" "project" {
project = "${google_project.project.project_id}"
services = [
"bigquery-json.googleapis.com",
"compute.googleapis.com",
"container.googleapis.com",
"containerregistry.googleapis.com",
"deploymentmanager.googleapis.com",
"dns.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
"oslogin.googleapis.com",
"pubsub.googleapis.com",
"replicapool.googleapis.com",
"replicapoolupdater.googleapis.com",
"resourceviews.googleapis.com",
"servicemanagement.googleapis.com",
"sql-component.googleapis.com",
"sqladmin.googleapis.com",
"storage-api.googleapis.com",
]
}
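The random_id resource above generates 4 random bytes, which render as 8 hexadecimal characters in the project ID. A rough shell equivalent of the naming scheme, shown only for illustration (Terraform generates and tracks the real value in its state):

```shell
# Rough equivalent of random_id: 4 random bytes -> 8 hex characters,
# joined here with the prefix for readability
prefix="terraform-dev-"   # terraform-${var.project_name[terraform.workspace]}-
suffix=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
project_id="${prefix}${suffix}"
echo "$project_id"
```

The random suffix keeps project IDs globally unique, which Google Cloud requires.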
Initialize and Pull Terraform Cloud-Specific Dependencies
Terraform uses a modular setup, and in order to download the specific plugin for a cloud provider, Terraform needs to be initialized first.
terraform init
Once we have our project code and our tfvars secrets secured, we can create workspaces for Terraform.
NOTE: In the example below we will use only the dev workspace, but you can use both by following the same logic.
Create dev workspace:
terraform workspace new dev
List available workspaces:
terraform workspace list
Switch between workspaces:
terraform workspace select dev
The Terraform plan will simulate what changes will be done on the cloud provider:
terraform plan
Apply Terraform plan for selected environment:
terraform apply
With the above code, we have only created a new project in Google Cloud; which project depends on the Terraform workspace we are in.
Below is the sequence of commands to run:
terraform init
terraform workspace new dev
terraform plan
terraform apply
5. Creating a Kubernetes Cluster on GKE and PostgreSQL on Cloud SQL
Once we have the projects ready for dev and prod, we can move on to deploying our GKE and SQL infrastructure.
Code structure
.
├── backend
│ ├── firewall
│ │ ├── main.tf
│ │ └── variables.tf
│ ├── subnet
│ │ ├── main.tf
│ │ ├── outputs.tf
│ │ └── variables.tf
│ └── vpc
│ ├── main.tf
│ └── outputs.tf
├── backend.tf
├── cloudsql
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
├── gke
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
├── main.tf
├── outputs.tf
└── variables.tf
Now it is time to deploy our infrastructure. The noticeable differences between the prod and dev workspaces can be found in the Terraform files:
- dev — single instance of PostgreSQL without replication and read replica
- prod — single instance in multi-AZ for high availability and additional one read replica for PostgreSQL
- dev — single Kubernetes node will be added to GKE
- prod — two nodes will be created and added to Kubernetes GKE
In order to keep our code clean, I decided to use modules for every segment: networking (vpc, subnet, and firewall), cloudsql, and gke. All these modules can be maintained in separate git repositories and are called by the root main.tf file.
# Configure the Google Cloud provider
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "terraform-project"
}
}
provider "google" {
version = "~> 1.16"
project = "${data.terraform_remote_state.project_id.project_id}"
region = "${var.region}"
}
module "vpc" {
source = "./backend/vpc"
}
module "subnet" {
source = "./backend/subnet"
region = "${var.region}"
vpc_name = "${module.vpc.vpc_name}"
subnet_cidr = "${var.subnet_cidr}"
}
module "firewall" {
source = "./backend/firewall"
vpc_name = "${module.vpc.vpc_name}"
ip_cidr_range = "${module.subnet.ip_cidr_range}"
}
module "cloudsql" {
source = "./cloudsql"
region = "${var.region}"
availability_type = "${var.availability_type}"
sql_instance_size = "${var.sql_instance_size}"
sql_disk_type = "${var.sql_disk_type}"
sql_disk_size = "${var.sql_disk_size}"
sql_require_ssl = "${var.sql_require_ssl}"
sql_master_zone = "${var.sql_master_zone}"
sql_connect_retry_interval = "${var.sql_connect_retry_interval}"
sql_replica_zone = "${var.sql_replica_zone}"
sql_user = "${var.sql_user}"
sql_pass = "${var.sql_pass}"
}
module "gke" {
source = "./gke"
region = "${var.region}"
min_master_version = "${var.min_master_version}"
node_version = "${var.node_version}"
gke_num_nodes = "${var.gke_num_nodes}"
vpc_name = "${module.vpc.vpc_name}"
subnet_name = "${module.subnet.subnet_name}"
gke_master_user = "${var.gke_master_user}"
gke_master_pass = "${var.gke_master_pass}"
gke_node_machine_type = "${var.gke_node_machine_type}"
gke_label = "${var.gke_label}"
}
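For reference, here is a minimal sketch of what the subnet module's main.tf might contain. This is an assumption for illustration only: the resource follows the real google_compute_subnetwork schema, but the names and the actual module code live in the repository.

```hcl
# Hypothetical backend/subnet/main.tf consuming the inputs passed above
resource "google_compute_subnetwork" "subnet" {
  name          = "terraform-subnet"   # hypothetical name
  network       = "${var.vpc_name}"
  region        = "${var.region}"
  ip_cidr_range = "${var.subnet_cidr}"
}
```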
I keep all variables consumed by the modules in a single variables.tf file.
We will use the same Google storage bucket but with a different prefix, so as not to conflict with the project-creation Terraform plan.
# Configure the Google Cloud tfstate file location
terraform {
backend "gcs" {
bucket = "terraform-admin-mmm"
prefix = "terraform"
}
}
As Terraform needs to be aware of the new projects we created in the previous step, we read the remote state from that project-creation run:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "terraform-project"
}
}
We are now ready to run our plan and create the infrastructure.
- As we are in a separate code base, we will need to follow the same sequence as in the project creation.
- Just make sure you have a new terraform.tfvars:
bucket_name = "terraform-admin-example"
gke_master_pass = "your-gke-password"
sql_pass = "your-sql-password"
Initialize and pull Terraform cloud-specific dependencies:
terraform init
Create dev workspace:
terraform workspace new dev
List available workspaces:
terraform workspace list
Switch between workspaces:
terraform workspace select dev
The Terraform plan will simulate what changes will be done on the cloud provider:
terraform plan
Apply Terraform:
terraform apply
To check what Terraform deployed, use:
terraform show
Once the test is completed, you can remove ("destroy") all of the built-up infrastructure:
terraform destroy -auto-approve
Terraform Tips
Refresh Terraform
terraform refresh
List and show the Terraform state:
terraform state list
terraform state show <resource-address>
Use tflint to check the syntax of the tf files:
tflint
Destroy only a selected module, e.g.:
terraform destroy -target=module.cloudsql
After any change to a module, Terraform needs to be re-initialized or synchronized with the later updates from the modules:
terraform get -update
The source code for this article can be found on GitHub.
Published at DZone with permission of Ion Mudreac. See the original article here.