Taylor Turner

Reputation: 198

How to use Amazon EFS with EKS in Terraform

So far I have 2 directories:

aws/
k8s/

Inside aws/ are .tf files describing a VPC, networking, security groups, IAM roles, an EKS cluster, an EKS node group, and a few EFS mounts. These all use the AWS provider, and the state is stored in S3.

Then in k8s/ I'm using the Kubernetes provider to create Kubernetes resources inside the EKS cluster I created. This state is stored in the same S3 bucket under a different state file.
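
For context, the k8s/ backend config looks something like this (the bucket and key names are illustrative):

terraform {
  backend "s3" {
    bucket = "example-s3-terraform" # same bucket as the aws/ state
    key    = "k8s-provider.tfstate" # different key than the aws/ state
    region = "us-east-1"
  }
}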

I'm having trouble figuring out how to mount the EFS file systems as Persistent Volumes in my pods.

I've found docs describing how to use an efs-provisioner pod to do this; see "How do I use EFS with EKS?".

More recent EKS docs say to use the Amazon EFS CSI Driver instead. The first step is to kubectl apply the following kustomization file.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
images:
- name: amazon/aws-efs-csi-driver
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/aws-efs-csi-driver
  newTag: v0.2.0
- name: quay.io/k8scsi/livenessprobe
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-liveness-probe
  newTag: v1.1.0
- name: quay.io/k8scsi/csi-node-driver-registrar
  newName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/csi-node-driver-registrar
  newTag: v1.1.0

Does anyone know how I would do this in Terraform? Or, in general, how to mount EFS file systems as PVs in an EKS cluster?

Upvotes: 5

Views: 15040

Answers (3)

libcthorne

Reputation: 417

This part of Taylor's answer can be automated if you're able to assume kubectl is installed:

"After the EKS cluster is created I had to manually install the EFS CSI driver into the cluster before continuing."

# https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
resource "null_resource" "install_efs_csi_driver" {
  # Terraform 0.13+ allows depending on the module as a whole; the kubeconfig
  # reference below also creates an implicit dependency on the cluster.
  depends_on = [module.eks]
  provisioner "local-exec" {
    command = format("kubectl --kubeconfig %s apply -k 'github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.1'", module.eks.kubeconfig_filename)
  }
}
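
If you're on a recent AWS provider and your cluster supports EKS add-ons, another option is to install the driver as a managed add-on and skip kubectl entirely. A minimal sketch (the cluster_id output name is an assumption about your EKS module):

resource "aws_eks_addon" "efs_csi" {
  cluster_name = module.eks.cluster_id # assumed module output; adjust to yours
  addon_name   = "aws-efs-csi-driver"
}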

Upvotes: 2

Taylor Turner

Reputation: 198

@BMW had it right; I was able to get this all into Terraform.

In the aws/ directory I created all my AWS resources (VPC, EKS cluster, workers, etc.) plus the EFS mounts.

resource "aws_efs_file_system" "example" {
  creation_token = "${var.cluster-name}-example"

  tags = {
    Name = "${var.cluster-name}-example"
  }
}

resource "aws_efs_mount_target" "example" {
  count = 2
  file_system_id = aws_efs_file_system.example.id
  subnet_id = aws_subnet.this.*.id[count.index]
  security_groups = [aws_security_group.eks-cluster.id]
}

I also export the EFS file system IDs from the AWS provider plan.

output "efs_example_fsid" {
  value = aws_efs_file_system.example.id
}
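
Since the PV below mounts over NFS, it can also be handy to export the file system's DNS name, which the AWS provider exposes directly:

output "efs_example_dns_name" {
  value = aws_efs_file_system.example.dns_name
}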

After the EKS cluster was created, I had to manually install the EFS CSI driver into the cluster before continuing.

Then in the k8s/ directory I reference the aws/ state file so I can use the EFS file system IDs when creating the PVs.

data "terraform_remote_state" "remote" {
  backend = "s3"
  config = {
    bucket = "example-s3-terraform"
    key    = "aws-provider.tfstate"
    region = "us-east-1"
  }
}

Then I created the Persistent Volumes using the Kubernetes provider.

resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-efs-pv"
  }
  spec {
    storage_class_name = "efs-sc"
    persistent_volume_reclaim_policy = "Retain"
    capacity = {
      storage = "2Gi"
    }
    access_modes = ["ReadWriteMany"]
    persistent_volume_source {
      nfs {
        path = "/"
        server = data.terraform_remote_state.remote.outputs.efs_example_fsid
      }
    }
  }
}
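
To actually consume the PV from a pod, you bind it with a claim. A minimal sketch (the names are illustrative):

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-efs-pvc"
  }
  spec {
    storage_class_name = "efs-sc"
    access_modes       = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "2Gi"
      }
    }
    # Bind directly to the PV created above.
    volume_name = kubernetes_persistent_volume.example.metadata[0].name
  }
}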

Upvotes: 6

BMW

Reputation: 45223

Here is my understanding of your question.

First, you need to use Terraform to create the EFS file system:

resource "aws_efs_file_system" "foo" {
  creation_token = "my-product"

  tags = {
    Name = "MyProduct"
  }
}

resource "aws_efs_mount_target" "alpha" {
  file_system_id = "${aws_efs_file_system.foo.id}"
  subnet_id      = "${aws_subnet.alpha.id}" # depend on how you set the vpc with terraform
}

After that, record the EFS file system ID, for example fs-582a03f3.

Then add the CSIDriver object for EFS and create the Persistent Volume. These steps happen inside Kubernetes: you can apply them with kubectl directly, with Helm charts or Kustomize, or with the Terraform Kubernetes provider, referencing aws_efs_file_system.foo.id (https://www.terraform.io/docs/providers/kubernetes/index.html).

---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-582a03f3
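
If you'd rather keep this in Terraform instead of raw YAML, the Kubernetes provider can express the same PV. A minimal sketch, assuming a provider version recent enough to support the csi volume source (volume_mode defaults to Filesystem):

resource "kubernetes_persistent_volume" "efs_pv" {
  metadata {
    name = "efs-pv"
  }
  spec {
    capacity = {
      storage = "5Gi"
    }
    access_modes                     = ["ReadWriteMany"]
    persistent_volume_reclaim_policy = "Retain"
    storage_class_name               = "efs-sc"
    persistent_volume_source {
      csi {
        driver        = "efs.csi.aws.com"
        volume_handle = aws_efs_file_system.foo.id
      }
    }
  }
}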

Upvotes: 1
