eirikir

Reputation: 3842

Use Terraform output in Kubernetes

I'm using a slightly customized Terraform configuration to generate my Kubernetes cluster on AWS. The configuration includes an EFS instance attached to the cluster nodes and master. In order for Kubernetes to use this EFS instance for volumes, my Kubernetes YAML needs the id and endpoint/domain of the EFS instance generated by Terraform.

Currently, my Terraform outputs the EFS id and DNS name, and I need to manually edit my Kubernetes YAML with these values after terraform apply and before I kubectl apply the YAML.

How can I automate passing these Terraform output values to Kubernetes?

Upvotes: 2

Views: 2322

Answers (1)

Rutger de Knijf

Reputation: 1182

I don't know what you mean by a YAML to set up a Kubernetes cluster in AWS; then again, I've always set up my AWS clusters using kops. I also don't understand why you would want to mount an EFS volume on the master and/or nodes instead of in the containers.

But in direct answer to your question: you could write a script that writes your Terraform outputs to a Helm values file and use that to generate the k8s config.
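
A minimal sketch of such a script (the function name and output names `efs_id`/`efs_dns` are assumptions, not from the question): it takes the JSON produced by `terraform output -json` and turns it into flat Helm values YAML.

```python
import json

def terraform_outputs_to_values(output_json: str) -> str:
    """Turn the JSON from `terraform output -json` into a flat Helm values file.

    Each output entry in that JSON looks like
    {"value": ..., "type": ..., "sensitive": ...}; only "value" is kept.
    """
    outputs = json.loads(output_json)
    lines = [f"{name}: {entry['value']}" for name, entry in sorted(outputs.items())]
    return "\n".join(lines) + "\n"

# Intended wiring (run outside this sketch, in the cluster's working dir):
#   terraform output -json > outputs.json
#   ...pass the file contents through terraform_outputs_to_values()...
#   helm install -f values.yaml <chart>
```
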

I stumbled upon this question while searching for a way to get Terraform outputs into environment variables specified in Kubernetes, and I expect others will too. I also suspect that was really your question as well, or at least that it is one way to solve your problem. So:

You can use the Kubernetes Terraform provider to connect to your cluster and then use the kubernetes_config_map resource to create ConfigMaps:

provider "kubernetes" {
  # With no arguments, the provider typically reads the local kubeconfig
  # (~/.kube/config) to reach the cluster.
}

resource "kubernetes_config_map" "efs_configmap" {
  metadata {
    # Kubernetes object names must be valid DNS subdomains,
    # so use dashes rather than underscores here.
    name = "efs-config"
  }

  data {
    efs_id  = "${aws_efs_mount_target.efs_mt.0.id}"
    efs_dns = "${aws_efs_mount_target.efs_mt.0.dns_name}"
  }
}

If you have secret parameters, use the kubernetes_secret resource:

resource "kubernetes_secret" "some_secrets" {
  metadata {
    name = "some-secrets"
  }

  data {
    s3_iam_access_secret = "${aws_iam_access_key.someresourcename.secret}"
    rds_password         = "${aws_db_instance.someresourcename.password}"
  }
}

You can then consume these in your k8s YAML when setting your environment variables:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app-deployment
spec:
  selector:
    matchLabels:
      app: some
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
        - name: some-app-container
          image: some-app-image
          env:
            - name: EFS_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-config
                  key: efs_id
            - name: RDS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: some-secrets
                  key: rds_password
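
Inside the container these then arrive as ordinary environment variables, so application code reads them like any other setting. A tiny sketch (variable names taken from the manifest above; the helper function is hypothetical):

```python
import os

def read_injected_settings() -> dict:
    """Read the values injected via the Deployment's env entries.

    EFS_ID comes from the ConfigMap, RDS_PASSWORD from the Secret;
    .get() with a default avoids a KeyError if a variable is unset.
    """
    return {
        "efs_id": os.environ.get("EFS_ID", ""),
        "rds_password": os.environ.get("RDS_PASSWORD", ""),
    }
```
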

Upvotes: 2
