Managing Kubernetes Clusters with Terraform

Summary

Kubernetes is an attractive option for many companies deploying their applications. However, managing it can be complicated, and there are many different tools and techniques for doing so. In this post, I will discuss managing your Kubernetes cluster with Terraform, another tool that is widely used these days.

Kubernetes Management

With the promise of easily deploying and running containers, many people are choosing Kubernetes as their orchestration engine. However, Kubernetes is a complicated beast. In most cases you will need to configure a lot of different resources just to establish a baseline for how you want it to run.

These may include resources for IAM, logging, monitoring, and service discovery, to name a few. How are we to manage all of these resources?

Fortunately, Kubernetes resources are well suited to Terraform. With its idempotent, simple REST API, Kubernetes is a natural fit, and there is a Terraform provider specifically for it.

Terraform Kubernetes Provider

The Terraform provider for Kubernetes offers resources for all the objects you would need to get a Kubernetes cluster up and running. It can create and manage deployments, cluster roles, ingresses, pods, and much more. Take the following example from the official docs; you could create a service like so:

resource "kubernetes_service" "example" {
  metadata {
    name = "terraform-example"
  }
  spec {
    selector = {
      app = kubernetes_pod.example.metadata[0].labels.app
    }
    session_affinity = "ClientIP"
    port {
      port        = 8080
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

As you can see, the resource provides a nice, typed data structure for you to create your Kubernetes resources. This is very beneficial when using an IDE that supports Terraform.

The downside is that you will spend a lot of time converting from Kubernetes’ “native” YAML format to Terraform’s HCL. This can be a painful experience and is not as copy-paste friendly as using the YAML. However, once the conversion is done, it’s hard to argue that you aren’t better off with a typed data structure than you would be with just raw YAML.
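To give a feel for the conversion, here is what a simple ConfigMap looks like once translated into HCL. The resource name, namespace, and keys here are hypothetical, purely for illustration:

```hcl
# A ConfigMap that would be written as YAML for kubectl,
# expressed instead as a typed HCL resource.
resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "app-config"
    namespace = "default"
  }

  data = {
    "log_level" = "info"
    "feature_x" = "enabled"
  }
}
```

Note that nested objects like `metadata` become blocks, while simple maps like `data` become HCL map attributes. That distinction trips up most hand conversions at first.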

Note: There are a few Terraform providers out there that will work with the raw YAML, but I did not consider these as they are not “official”. In addition, there is some discussion about adding this capability to the official provider, though whether it should happen is still debated.

Helm

Another useful tool for managing Kubernetes is Helm. Many people use it, and you could get by using nothing but Helm to manage your baseline configuration. A Helm chart is essentially a package of all the resources needed for a deployment to Kubernetes, and deploying charts from the stable repository is very convenient, making it easy to spin up new releases.

To that end, there is also a Helm provider for Terraform. It is a very simple provider, only offering a data resource for configuring repositories, and a release resource for deploying Helm releases.

From the official docs:

resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = data.helm_repository.stable.metadata[0].name
  chart      = "redis"
  version    = "6.0.1"

  values = [
    file("values.yaml")
  ]

  set {
    name  = "cluster.enabled"
    value = "true"
  }

  set_string {
    name  = "service.annotations.prometheus\\.io/port"
    value = "9127"
  }
}

As you can see, the release resource lets you specify the repository to pull the chart from and override specific properties. Alternatively, you can specify configuration override files: YAML files that Helm will use to override certain values in the chart. These values depend entirely on the chart you are installing, so you will have to refer to its documentation to get things set up.
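If you would rather not keep a separate values.yaml file on disk, Terraform’s built-in yamlencode() function can build the override YAML inline. This is just a sketch; the cluster.enabled and persistence keys shown are illustrative and depend on the chart you actually install:

```hcl
resource "helm_release" "example_inline" {
  name  = "my-redis-release"
  chart = "redis"

  # yamlencode() turns an HCL object into the YAML that Helm
  # expects, keeping the overrides in the same file as the release.
  values = [
    yamlencode({
      cluster = {
        enabled = true
      }
      master = {
        persistence = {
          size = "8Gi"
        }
      }
    })
  ]
}
```

The upside of this approach is that the overrides stay typed and reviewable in the same plan as everything else, rather than hiding in a side file.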

Getting Started


Both the Helm and Kubernetes providers work with the same configuration you are already using for kubectl. If you can execute kubectl get nodes, then you can manage your cluster with them. You do not, however, need the helm and kubectl tools installed for the providers to work. Both of these providers also allow you to import existing resources into Terraform, so it’s never too late to start!
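Pointing the providers at your kubectl configuration is just a couple of blocks. The kubeconfig path below is the usual default; adjust it if yours lives elsewhere:

```hcl
# Reuse the same kubeconfig that kubectl reads.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

# The Helm provider takes its cluster connection in a
# nested kubernetes block.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}
```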

One note for people using older versions of Helm (< 3): later versions of the Helm provider are compiled against the Helm source and do not use the helm CLI, so you will need to pin the provider version in your Terraform manifest.

  required_providers {
    helm = "~> 0.10.4" // pin version to < 1.0.0 to use helm CLI
  }

Summary

In my case, I first attempted to install my baseline configuration with a single Helm chart. To do that, you have to figure out a lot of things; for one, Helm chart templating can get complicated with all the overrides. In addition, all the resources had to live in the same namespace.

I found the Kubernetes and Helm providers for Terraform to be extremely useful. Not only do they run fast, but it’s easier to manage the resources using typed data structures. I will admit that converting from YAML to HCL was painful at times, but once you get past that, you rarely think about it again.

It also fits in nicely with your existing Terraform workflow (if you have one). I highly recommend looking into managing Kubernetes with this toolset.

Kerry Wilson
AWS Certified IQ Expert | Cloud Architect

Coming from a development background, Kerry’s focus is on application development, infrastructure and security automation, and applying agile software development practices to IT operations in the cloud.
