Sunil Gajula

Reputation: 1223

How To Run kubectl apply commands in terraform

I have developed a Terraform script to create a Kubernetes cluster on GKE.

After the cluster is created successfully, I have a set of YAML files to apply to the cluster.

How can I invoke the below command in my terraform script?

kubectl create -f <.yaml>

Upvotes: 34

Views: 65825

Answers (11)

Hongbo Miao

Reputation: 49804

Context

I tried to use kubectl_path_documents, but ran into an issue and ended up with the solution below.

Solution

locals {
  manifest_dir_path  = "gateway-api/manifests"
}

data "kubectl_path_documents" "main" {
  pattern = "${local.manifest_dir_path}/*.yaml"
}

resource "kubectl_manifest" "cert_manager_manifest" {
  count = length(
    flatten(
      toset([
        for f in fileset(".", data.kubectl_path_documents.main.pattern) : split("\n---\n", file(f))
      ])
    )
  )
  yaml_body         = element(data.kubectl_path_documents.main.documents, count.index)
  server_side_apply = true
  wait              = true
}

Reference

Credit to Carlos Alexandre, who posted this solution on GitHub. Thank you!

Upvotes: 0

Jiarui Tian

Reputation: 1

I would recommend always using kubectl_path_documents.

If a YAML file contains more than one object separated by ---, you have to use kubectl_path_documents: it not only loads multiple YAML files, but also handles multiple objects split by --- within a single file.

data "kubectl_path_documents" "metrics_server_yaml" { // somehow pattern only supports search in local file directory, not from remote url. pattern = "${path.module}/yamls/metrics-server.yaml" }

resource "kubectl_manifest" "metrics_server_manifests" { count = length(data.kubectl_path_documents.metrics_server_yaml.documents) yaml_body = element(data.kubectl_path_documents.metrics_server_yaml.documents, count.index) }

If you need to load the YAML from an HTTPS URL, you can run curl in a null_resource, as sketched below.
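
A minimal sketch of that approach (the URL is a placeholder, and the target path matches the pattern above):

resource "null_resource" "download_yaml" {
  provisioner "local-exec" {
    # Download the remote manifest so kubectl_path_documents can read it from disk.
    command = "curl -sL https://example.com/metrics-server.yaml -o ${path.module}/yamls/metrics-server.yaml"
  }
}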

Upvotes: 0

sam

Reputation: 1896

Adding some more information to the existing answers for 2022 viewers. The question doesn't say where the terraform and kubectl commands need to be executed, so both cases are covered below.

Case 1: Developer is using a local system that is not part of Google Cloud Platform

In this case, when you use null_resource to execute a command, the command runs on your local PC, not in Google Cloud.

Case 2: Developer is using a system that is part of Google Cloud Platform, such as the Google Cloud Console terminal or Google Cloud Code

In this case, when you use null_resource to execute a command, the command runs in a temporary environment that is actively managed by Google Cloud.

From either environment, it is possible to execute kubectl commands using Terraform. That said, the next question is whether it is a good approach. It's not.

GCP is not a free resource. When building or using tools, it is important to use the right tool for the right job. In DevOps there are two core areas: setting up infrastructure, where Terraform works best, and managing infrastructure, where Ansible works best. For the same reason, GCP actively provides support for both.

Upvotes: 1

maxdebayser

Reputation: 1066

You can also use the helm provider together with the itscontained chart. For example, the Tekton dashboard could be installed like this, building on Yuri's YAML-splitting expression:

data "http" "tekton_dashboard_install" {
  url = "https://storage.googleapis.com/tekton-releases/dashboard/previous/v0.26.0/tekton-dashboard-release.yaml"

  request_headers = {
    Accept = "application/octet-stream"
  }
}

locals {
  tekton_dashboard_manifests = [
    for yaml in split(
      "\n---\n",
      "\n${replace(data.http.tekton_dashboard_install.body, "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
    ) :
    yamldecode(yaml)
    if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
  ]
}

resource "helm_release" "tekton_dashboard" {
  name       = "tekton_dashboard"
  repository = "https://charts.itscontained.io"
  chart      = "raw"
  version    = "0.2.5"
  namespace  =  "tekton-pipelines"

  values = [
    yamlencode({ resources = local.tekton_dashboard_manifests })
  ]
}

Some YAML files, like the Tekton core file, come with a namespace definition that must be filtered out first, which is easy once the YAML is parsed.
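
For example, a minimal sketch of that filter (the local name tekton_manifests_without_namespace is hypothetical):

locals {
  # Keep every parsed object except Namespace definitions.
  tekton_manifests_without_namespace = [
    for m in local.tekton_dashboard_manifests : m if m["kind"] != "Namespace"
  ]
}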

This solution avoids the "Provider produced inconsistent result after apply" problems with the kubernetes_manifest resources and the hacky workarounds that follow.

Upvotes: 0

Yuri Astrakhan

Reputation: 9965

When creating multiple Terraform resources, it is usually better to use named resource instances than a list (count). If the source file is updated and the order of the Kubernetes resources changes, Terraform may delete and recreate resources just because their index changed. This code creates a key by concatenating the kind and metadata.name fields:

data "kubectl_file_documents" "myk8s" {
  content = file("./myk8s.yaml")
}

resource "kubectl_manifest" "myk8s" {
  # Create a map of { "kind--name" => raw_yaml }
  for_each  = {
    for value in [
      for v in data.kubectl_file_documents.myk8s.documents : [yamldecode(v), v]
    ] : "${value.0["kind"]}--${value.0["metadata"]["name"]}" => value.1
  }
  yaml_body = each.value
}

In the future you may want to use Hashicorp's official kubernetes_manifest resource instead (as of 2.5.0 -- in beta, buggy):

resource "kubernetes_manifest" "default" {
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("./myk8s.yaml"), "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }
  manifest = each.value
}

Upvotes: 1

smiletrl

Reputation: 396

When the YAML file is hosted at a remote URL and includes multiple configs/objects, here's what can be done.

resource "null_resource" "controller_rancher_installation" {

  provisioner "local-exec" {
    command = <<EOT
      echo "Downloading rancher config"
      curl -L https://some-url.yaml -o rancherconfig.yaml
    EOT
  }
}

data "kubectl_path_documents" "rancher_manifests" {
    pattern = "./rancherconfig.yaml"
    depends_on = [null_resource.controller_rancher_installation]
}

resource "kubectl_manifest" "spot_cluster_controller" {
    count     = length(data.kubectl_path_documents.spot_controller_manifests.documents)
    yaml_body = element(data.kubectl_path_documents.spot_controller_manifests.documents, count.index)
}

The idea is to download the file first, and then apply it. This is based on two observations:

  1. pattern = "./rancherconfig.yaml" doesn't support remote URLs, only local files.
  2. kubectl_manifest by default applies only the first config/object in the YAML file.

Upvotes: 0

david_g

Reputation: 789

You can use the Terraform kubectl third-party provider. Follow the installation instructions here: Kubectl Terraform Provider

Then simply define a kubectl_manifest pointing to your YAML file like:

# Get your cluster-info
data "google_container_cluster" "my_cluster" {
  name     = "my-cluster"
  location = "us-east1-a"
}

# Access token for the configured Google credentials
# (google_container_cluster does not export an access_token attribute)
data "google_client_config" "default" {}

# Same parameters as kubernetes provider
provider "kubectl" {
  load_config_file       = false
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)
}

resource "kubectl_manifest" "my_service" {
    yaml_body = file("${path.module}/my_service.yaml")
}

This approach has the big advantage that everything is obtained dynamically and does not rely on any local config file (very important if you run Terraform in a CI/CD server or to manage a multicluster environment).

Multi-object manifest files

The kubectl provider also offers data sources that help handle files with multiple objects very easily. From the docs, kubectl_filename_list:

data "kubectl_filename_list" "manifests" {
    pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "test" {
    count = length(data.kubectl_filename_list.manifests.matches)
    yaml_body = file(element(data.kubectl_filename_list.manifests.matches, count.index))
}

Extra points: you can templatize your YAML files. I interpolate the cluster name into the multi-resource autoscaler YAML file as follows:

resource "kubectl_manifest" "autoscaler" {
  yaml_body = templatefile("${path.module}/autoscaler.yaml", {cluster_name = var.cluster_name })
}
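
For reference, the placeholder inside autoscaler.yaml would look something like this (the excerpt is illustrative, not the actual manifest):

# autoscaler.yaml (excerpt): ${cluster_name} is substituted by templatefile()
metadata:
  name: cluster-autoscaler
  labels:
    cluster: ${cluster_name}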

Upvotes: 57

Phillip Fleischer

Reputation: 1103

The answers here are great. One suggestion: as your requirements evolve beyond the initial manifest, you may want to look at creating a Helm chart from the manifest (or maybe one already exists) and using the Terraform helm provider instead to set the values for your environment.

https://tech.paulcz.net/blog/getting-started-with-helm/

You'll notice an advantage of the helm provider: it is easy to override and manage changes to values per Terraform environment, instead of embedding them in the manifest; see the sketch below.

https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
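
For instance, a minimal sketch of overriding a chart value per environment (the chart, repository, and value names are illustrative):

resource "helm_release" "my_app" {
  name       = "my-app"
  repository = "https://charts.example.com"
  chart      = "my-app"

  # Override a chart value from Terraform instead of hard-coding it in a manifest.
  set {
    name  = "replicaCount"
    value = var.replica_count
  }
}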

Upvotes: 0

Ricardo Jover

Reputation: 149

There are a couple of ways to achieve what you want to do.

You can use the Terraform resources template_file and null_resource.
Notice that I'm using a trigger to run the kubectl command whenever you modify the template (you may want to replace create with apply).

data "template_file" "your_template" {
  template = "${file("${path.module}/templates/<.yaml>")}"
}

resource "null_resource" "your_deployment" {
  triggers = {
    manifest_sha1 = "${sha1("${data.template_file.your_template.rendered}")}"
  }

  provisioner "local-exec" {
    command = "kubectl create -f -<<EOF\n${data.template_file.your_template.rendered}\nEOF"
  }
}

But maybe the best way is to use the Kubernetes provider.
There are two ways to configure it:

  • By default your manifests will be deployed in your current context (kubectl config current-context); see the sketch after the example below.
  • The second way is to statically define TLS certificate credentials:
provider "kubernetes" {
  host = "https://104.196.242.174"

  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}
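
For the first option, a minimal sketch with recent provider versions (the context name is a placeholder):

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-context"
}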

Once that's done, you can create your own deployments pretty easily. For a basic pod, it'd be something as simple as:

resource "kubernetes_pod" "hello_world" {
  metadata {
    name = "hello-world"
  }

  spec {
    container {
      image = "my_account/hello-world:1.0.0"
      name  = "hello-world"
    }

    image_pull_secrets {
      name = "docker-hub"
    }
  }
}

Upvotes: 14

AnmolNagpal

Reputation: 415

You can use Terraform's local-exec provisioner to do this.

   resource "aws_instance" "web" {
     # ...
     provisioner "local-exec" {
      command = "echo ${aws_instance.web.private_ip} >> private_ips.txt"
     }
   }

Ref: https://www.terraform.io/docs/provisioners/local-exec.html
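
Adapted to the question, a minimal sketch (the manifest directory is a placeholder, and kubectl must already be configured for the cluster):

resource "null_resource" "apply_manifests" {
  provisioner "local-exec" {
    # Runs on the machine executing Terraform, not inside the cluster.
    command = "kubectl apply -f ${path.module}/manifests/"
  }
}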

Upvotes: -1

Quentin Revel

Reputation: 1478

The best way would be to use the Terraform Kubernetes provider.

Upvotes: -12
