Manage local Kubernetes with Terraform

🎯 Objective

Why deploy with Terraform?

Quote

While you could use kubectl or similar CLI-based tools to manage your Kubernetes resources, using Terraform has the following benefits:

  • Unified Workflow - If you are already provisioning Kubernetes clusters with Terraform, use the same configuration language to deploy your applications into your cluster.
  • Full Lifecycle Management - Terraform doesn’t only create resources; it also updates and deletes tracked resources without requiring you to inspect the API to identify those resources.
  • Graph of Relationships - Terraform understands dependency relationships between resources. For example, if a Persistent Volume Claim claims space from a particular Persistent Volume, Terraform won’t attempt to create the claim if it fails to create the volume.

src: Manage Kubernetes resources via Terraform | Terraform | HashiCorp Developer

Also, we are already using Terraform at work, and the ops team is planning to use it to manage the Kubernetes cluster.

Launch a local Kubernetes cluster

I used k3d as my local Kubernetes distribution, but there are several other alternatives you can check out yourself.

Installation

With nix

It’s as simple as (also install kubectl so you can interact with your Kubernetes cluster):

{ pkgs, ... }: {
  home.packages = with pkgs; [ k3d kubectl ];
}

Launch

Create a default.yml with the following content:

---
# see https://k3d.io/v5.6.3/usage/configfile/ for complete config
apiVersion: k3d.io/v1alpha5
kind: Simple
servers: 1
agents: 0
image: docker.io/rancher/k3s:v1.30.1-k3s1
# ingress
ports:
  - port: 80:80
    nodeFilters:
      - server:0
# create a local registry reachable from the host docker
registries:
  create:
    name: registry.localhost
    host: "0.0.0.0"
    hostPort: "5000"

This config pins the version of k3s and creates a local Docker registry (useful for local tests). Then execute the following commands:

$ # create the cluster
$ k3d cluster create --config default.yml
 
$ # wait a bit and you can see the cluster is created
$ k3d cluster list
NAME          SERVERS   AGENTS   LOADBALANCER
k3s-default   1/1       0/0      true
 
$ # or using kubectl
$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   3d21h   v1.30.1+k3s1
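
Since the config also created a local registry, you can push images from the host and use them from the cluster. A minimal sketch, assuming the registry from default.yml is reachable on localhost:5000 (check k3d registry list for the exact name, as k3d may prefix it); myapp:dev is a hypothetical image:

$ # tag and push a locally built image to the cluster registry
$ docker tag myapp:dev localhost:5000/myapp:dev
$ docker push localhost:5000/myapp:dev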

Getting started with terraform

Installation

With nix

It’s as simple as:

{ pkgs, ... }: {
  home.packages = with pkgs; [ terraform ];
}

Configure terraform kubernetes provider

Create a kubernetes.tf that defines the Kubernetes cluster to connect to:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}
 
variable "host" {
  type = string
}
 
variable "client_certificate" {
  type = string
}
 
variable "client_key" {
  type = string
}
 
variable "cluster_ca_certificate" {
  type = string
}
 
provider "kubernetes" {
  host = var.host
 
  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}

Create a localhost.tfvars which will contain the variables of the localhost environment:

# Get the information from: kubectl cluster-info --context k3d-k3s-default
host                   = "https://0.0.0.0:41235"
# Get the values from: kubectl config view --minify --flatten --context=k3d-k3s-default
client_certificate     = "LS0tLS1..."
client_key             = "LS0tLS1..."
cluster_ca_certificate = "LS0tLS1..."
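
Rather than copying these values by hand, you can extract them with kubectl’s jsonpath output (these are standard kubeconfig fields and are already base64-encoded):

$ kubectl config view --minify --flatten --context=k3d-k3s-default \
    -o jsonpath='{.users[0].user.client-certificate-data}'
$ kubectl config view --minify --flatten --context=k3d-k3s-default \
    -o jsonpath='{.users[0].user.client-key-data}'
$ kubectl config view --minify --flatten --context=k3d-k3s-default \
    -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'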

Tip

Even better, we can configure the kubernetes provider from the local kubeconfig file:

provider "kubernetes" {
  config_path = "~/.kube/config"
}

Then there is no need to manually fetch the host, client certificate, …
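
If your kubeconfig holds several contexts, you can also pin the context explicitly (config_context is a standard argument of the kubernetes provider):

provider "kubernetes" {
  config_path    = "~/.kube/config"
  # pin the context so terraform doesn't follow whatever kubectl currently targets
  config_context = "k3d-k3s-default"
}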

Then execute the commands:

terraform init
terraform workspace new demo
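
Workspaces keep a separate Terraform state per environment, and we will later reuse the workspace name as the Kubernetes namespace. You can check which workspace is active (the * marks the current one):

$ terraform workspace list
  default
* demo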

You can test the integration by deploying Nginx. First, create a nginx.tf with the following content:

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx-example"
    labels = {
      App = "NginxExample"
    }
  }
 
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "NginxExample"
      }
    }
    template {
      metadata {
        labels = {
          App = "NginxExample"
        }
      }
      spec {
        container {
          image = "nginx:1.7.8"
          name  = "example"
 
          port {
            container_port = 80
          }
 
          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}

Then execute the following:

terraform apply -auto-approve

If everything went well, you will see the Pod running:

$ kubectl get po --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
nginx-example-9d5cc4b86-t64pr   1/1     Running   0          14s

To tear the deployment back down, execute the following:

terraform destroy

Add helm provider

Create a versions.tf with the following:

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.14.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.31.0"
    }
  }
  required_version = "~> 1.8"
}

Tip

Don’t forget to remove the terraform block from kubernetes.tf; otherwise, the kubernetes provider will be declared twice.

Then execute the following to download the helm provider:

terraform init
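
You can verify that both providers are picked up (the output may also list providers tracked in the state):

$ terraform providers
 
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/helm] ~> 2.14.0
└── provider[registry.terraform.io/hashicorp/kubernetes] ~> 2.31.0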

Then update nginx.tf with the following:

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
 
resource "helm_release" "nginx" {
  name       = "nginx"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
 
  values = [
    file("${path.module}/nginx-values.yaml")
  ]
}

Create nginx-values.yaml, which contains the Helm values to use for the chart:

replicaCount: 1

Let’s try it:

$ terraform apply --auto-approve -var-file localhost.tfvars
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # helm_release.nginx will be created
  + resource "helm_release" "nginx" {
      + atomic                     = false
      + chart                      = "nginx"
...
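
You can double-check the release with the Helm CLI (columns abridged; your revision and chart version may differ):

$ helm list --namespace default
NAME    NAMESPACE   REVISION    STATUS     CHART
nginx   default     1           deployed   nginx-...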

Deploy a local application to Kubernetes using Terraform and Helm

Create a local helm chart using the following:

helm create src/backend-app

Make your changes and, when you are ready, generate the tarball using the following command:

$ helm package ./src/backend-app -d ./dist
$ ls dist
backend-app-0.1.0.tgz
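
The 0.1.0 in the tarball name comes from the version field of the chart’s Chart.yaml. You can inspect the packaged chart before deploying it (helm show chart prints its Chart.yaml):

$ helm show chart ./dist/backend-app-0.1.0.tgz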

Create a src/backend-app/h-values.yml that will contain the overridden values, e.g.

targetEnv: demo
appName: backend-app

Create a src/terraform/backend-app.tf with the following:

resource "helm_release" "backend_app" {
  name       = "backend-app"
  chart      = "../../dist/backend-app-0.1.0.tgz"
  namespace  = terraform.workspace
 
  values = [
    file("../backend-app/h-values.yml")
  ]
}
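
Note that helm_release does not create the target namespace by default, so it must exist. A minimal sketch, assuming the demo namespace doesn’t exist yet, managing it with the kubernetes provider (alternatively, set create_namespace = true on the helm_release):

resource "kubernetes_namespace" "env" {
  metadata {
    # one namespace per terraform workspace, matching the release above
    name = terraform.workspace
  }
}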

And apply your changes:

$ terraform apply -auto-approve
 
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create
 
Terraform will perform the following actions:
 
  # helm_release.backend_app will be created
  + resource "helm_release" "backend_app" {
      + atomic                     = false
      + chart                      = "../../dist/backend-app-0.1.0.tgz"
      + cleanup_on_fail            = false
...
Plan: 2 to add, 0 to change, 0 to destroy.
...
helm_release.backend_app: Creation complete after 32s [id=backend-app]
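
And verify the Pods in the workspace namespace (the Pod names depend on your chart’s templates):

$ kubectl get po --namespace demo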

Use Terraform’s templating feature to dynamically generate Helm values

Yes, we will “template” (not a verb, but you get it) a templating tool… We are going ever deeper into the templating world… At some point, it will be quite hard to know what was wrongly interpolated across all those layers of templates…

Anyway, we want to use Terraform variables in the Helm values file so we can generate it dynamically. Let’s create a src/backend-app/h-values.yml.tpl and change its content, e.g.:

targetEnv: ${targetEnv}
app:
  name: ${appName}
 
# Other properties...

Then let’s edit the backend-app.tf:

resource "helm_release" "backend_app" {
  name       = "backend-app"
  chart      = "../../dist/backend-app-0.1.0.tgz"
  namespace  = terraform.workspace
 
  values = [
    templatefile("../backend-app/h-values.yml.tpl", {
      targetEnv = terraform.workspace
      appName = "backend-app"
    })
  ]
}

As you can see, the targetEnv in the rendered h-values.yml will be replaced by the Terraform workspace name (which is demo).
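
When the interpolation does go wrong, you can render the template outside of an apply with terraform console, which evaluates expressions against the current workspace:

$ echo 'templatefile("../backend-app/h-values.yml.tpl", { targetEnv = terraform.workspace, appName = "backend-app" })' \
    | terraform console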

So by using Terraform’s templating feature, we can also combine it with other Terraform features/plugins, like sops.