williamcodes

Reputation: 7216

How to configure EKS ALB with Terraform

I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself, or does one come for free as part of the EKS cluster? If I have to do it myself, does it belong in the Terraform template file or in the Kubernetes object YAML?

Here's my EKS cluster defined in Terraform along with what I think are the required permissions.

// eks.tf

resource "aws_iam_role" "eks_cluster_role" {
  name = "${local.env_name}-eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_kms_key" "eks_key" {
  description             = "EKS KMS Key"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  tags = {
    Environment = local.env_name
    Service     = "EKS"
  }
}

resource "aws_kms_alias" "eks_key_alias" {
  target_key_id = aws_kms_key.eks_key.id
  name          = "alias/eks-kms-key-${local.env_name}"
}

resource "aws_eks_cluster" "eks_cluster" {
  name                      = "${local.env_name}-eks-cluster"
  role_arn                  = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  vpc_config {
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }

  encryption_config {
    resources = ["secrets"]

    provider {
      key_arn = aws_kms_key.eks_key.arn
    }
  }

  tags = {
    Environment = local.env_name
  }
}

resource "aws_iam_role" "eks_node_group_role" {
  name = "${local.env_name}-eks-node-group"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_eks_node_group" "eks_node_group" {
  instance_types  = var.node_group_instance_types
  node_group_name = "${local.env_name}-eks-node-group"
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  cluster_name    = aws_eks_cluster.eks_cluster.name
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  // Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  // Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
  ]
}

And here's my Kubernetes object YAML:

# hello-kubernetes.yaml

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.9
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80

I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress appear to be running fine.

$ kubectl get pods
NAME                                READY   STATUS             RESTARTS   AGE
hello-kubernetes-6cb7cd595b-25bd9   1/1     Running            0          6h13m
hello-kubernetes-6cb7cd595b-lccdj   1/1     Running            0          6h13m
hello-kubernetes-6cb7cd595b-snwvr   1/1     Running            0          6h13m

$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes   LoadBalancer   172.20.102.37   <pending>     80:32086/TCP   6h15m

$ kubectl get ingresses
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
hello-ingress   <none>   *                 80      3h45m

What am I missing and which file does it belong in?

Upvotes: 2

Views: 12682

Answers (3)

Jonas

Reputation: 128777

You need to install the AWS Load Balancer Controller by following its installation instructions. First, create the IAM role and permissions, which can be done with Terraform. Then apply the Kubernetes YAML that installs the controller into your cluster, which can be done with Helm or kubectl.

You also need to be aware of the subnet tagging that is required for creating a public-facing or internal load balancer.
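
In Terraform, the Helm install plus the subnet tags could look roughly like the sketch below. This is a minimal outline rather than the full install guide: it assumes a helm provider already authenticated against the cluster and that the controller's IAM policy and IRSA service account have been created per the official instructions; the chart location and values are the documented defaults.

// Sketch: install the AWS Load Balancer Controller via the Helm provider.
// Assumes the "helm" provider is configured for this EKS cluster and that the
// controller's IAM policy + IRSA service account already exist (see the install guide).
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks_cluster.name
  }

  set {
    name  = "serviceAccount.create"
    value = "false" // the service account is created separately with the IRSA role annotation
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
}

// Subnet tags the controller relies on for subnet discovery -- add these to the
// existing aws_subnet resources rather than copying this block:
//   public subnets (internet-facing load balancers): "kubernetes.io/role/elb" = "1"
//   private subnets (internal load balancers):       "kubernetes.io/role/internal-elb" = "1"

Note that an internet-facing ALB needs public subnets in the VPC; the configuration in the question only passes private_a and private_b, so public subnets would have to be added and tagged before a public-facing load balancer can be created.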

Upvotes: 1

Gautam Rajotya

Reputation: 176

This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this is to check the logs of the ingress controller. You can do this as follows:

Get the ingress controller pod name: kubectl get pods -n kube-system
Check the logs for that pod: kubectl logs <pod_name> -n kube-system

This will point you to the exact reason why you are not seeing the address.

If you do not find any pod running with an ingress controller name, then you will have to create the ingress controller first.

Upvotes: 0

camionegra

Reputation: 46

Usually the way to go is to put an ALB in front and route traffic to the EKS cluster, managing the ALB with the ALB Ingress Controller. This ingress controller acts as the link between the cluster and your ALB; here is the AWS documentation, which is pretty straightforward:

EKS w/ALB
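
For example, once that controller is installed, the hello-ingress from the question could be annotated roughly like this so the controller provisions an internet-facing ALB for it. This is a sketch using the documented alb.ingress.kubernetes.io annotations, kept in the same v1beta1 form as the question's manifest:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80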

Another option is to use an NGINX ingress controller with an NLB if the ALB doesn't suit your application's needs, as described in the following article:

NGINX w/NLB
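
With that setup, it is the NGINX ingress controller itself that gets exposed through a Service of type LoadBalancer annotated to request an NLB, roughly like this sketch (the name, namespace, and selector labels are illustrative placeholders; in practice they come from however the NGINX ingress controller is installed):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx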

Upvotes: 0
