user155813

Reputation: 154

Terraform fails to create EKS node group

resource "aws_eks_node_group" "n-cluster-group" {
  cluster_name    = aws_eks_cluster.n-cluster.name
  node_group_name = "n-cluster-group"
  node_role_arn   = aws_iam_role.eks-nodegroup.arn
  subnet_ids      = [aws_subnet.public.id, aws_subnet.public2.id]

  scaling_config {
    desired_size = 3
    max_size = 6
    min_size = 1
  }

  launch_template {
    id      = aws_launch_template.n-cluster.id
    version = aws_launch_template.n-cluster.latest_version
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
  ]
}

resource "aws_launch_template" "n-cluster" {
  image_id             = "ami-0d45236a5972906dd"
  instance_type        = "t3.medium"
  name_prefix          = "cluster-node-"

  block_device_mappings {
    device_name = "/dev/sda1"

    ebs {
      volume_size = 20
    }
  }
}

Although the instances appear to be created successfully, the node group status is CREATE_FAILED, and Terraform reports this as well.

  1. I am wondering what CREATE_FAILED means.

  2. What am I doing wrong? When using a launch template and an EKS-optimized AMI, should I still specify user_data, and if so, what is the correct way to do this using Terraform?

Upvotes: 2

Views: 2465

Answers (2)

Poh Peng Ric

Reputation: 121

I managed to solve the issue with the following configurations:


resource "aws_launch_template" "eks_launch_template" {
  name = "eks_launch_template"

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = 20
      volume_type = "gp2"
    }
  }

  image_id = <custom_ami_id>
  instance_type = "t3.medium"
  user_data = filebase64("${path.module}/eks-user-data.sh")

  tag_specifications {
    resource_type = "instance"

    tags = {
      Name = "EKS-MANAGED-NODE"
    }
  }
}

resource "aws_eks_node_group" "eks-cluster-ng" {
  cluster_name    = aws_eks_cluster.eks-cluster.name
  node_group_name = "eks-cluster-ng-"
  node_role_arn   = aws_iam_role.eks-cluster-ng.arn
  subnet_ids      = [var.network_subnets.pvt[0].id, var.network_subnets.pvt[1].id, var.network_subnets.pvt[2].id]
  scaling_config {
    desired_size = var.asg_desired_size
    max_size     = var.asg_max_size
    min_size     = var.asg_min_size
  }

  launch_template {
    name = aws_launch_template.eks_launch_template.name
    version = aws_launch_template.eks_launch_template.latest_version
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
  ]
}

The key lies with user_data = filebase64("${path.module}/eks-user-data.sh")

The eks-user-data.sh file should be something like this:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
/etc/eks/bootstrap.sh <cluster-name>

--==MYBOUNDARY==--

I have tested the above and it works as intended. Thanks all for leading me to this solution.
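A variant of the same idea (a sketch, not part of the answer above) is to render the cluster name into the user data with Terraform's `templatefile` function instead of hard-coding it in the script; `eks-user-data.sh.tpl` here is a hypothetical template file containing the same MIME document with `${cluster_name}` in place of the literal name:

```hcl
# Hypothetical template: eks-user-data.sh.tpl holds the MIME document shown
# above, with "/etc/eks/bootstrap.sh ${cluster_name}" as the bootstrap line.
user_data = base64encode(templatefile("${path.module}/eks-user-data.sh.tpl", {
  cluster_name = aws_eks_cluster.eks-cluster.name
}))
```

This keeps the user data in sync with the cluster resource rather than duplicating the name in two places.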

Upvotes: 1

T.H.

Reputation: 859

Adding this to your launch template definition resolves it:

user_data = base64encode(<<-EOF
#!/bin/bash -xe
/etc/eks/bootstrap.sh CLUSTER_NAME_HERE
EOF
)

I guess even an EKS-optimized AMI counts as a custom AMI when it is supplied via a launch template, so EKS no longer injects its bootstrap user data automatically and you have to call /etc/eks/bootstrap.sh yourself.
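As for question 1: CREATE_FAILED itself is generic, but the underlying reason is recorded in the node group's health issues, which you can read with the AWS CLI (the cluster and node group names below are placeholders):

```shell
# Prints the health issues behind a CREATE_FAILED node group, e.g.
# NodeCreationFailure: "Instances failed to join the kubernetes cluster".
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.health.issues'
```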

Upvotes: 0
