TFaws

Reputation: 273

Error kube-system/configmaps: dial tcp 127.0.0.1:80: connect: connection refused

I'm trying to deploy a cluster with self-managed node groups. No matter what config options I use, I always end up with the following error:

Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused

  with module.eks-ssp.kubernetes_config_map.aws_auth[0],
  on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource "kubernetes_config_map" "aws_auth":
    resource "kubernetes_config_map" "aws_auth" {

The .tf file looks like this:

module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"

# EKS CLUSTER
tenant            = "DevOpsLabs2"
environment       = "dev-test"
zone              = ""
terraform_version = "Terraform v1.1.4"

# EKS Cluster VPC and Subnet mandatory config
vpc_id             = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]

# EKS CONTROL PLANE VARIABLES
create_eks         = true
kubernetes_version = "1.19"

# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name        = "DevOpsLabs2"
subnet_ids             = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os     = "bottlerocket"       # amazonlinux2eks  or bottlerocket or windows
custom_ami_id          = "xxx"
public_ip              = true                   # Enable only for public subnets
pre_userdata           = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT

disk_size     = 20
instance_type = "t2.small"
desired_size  = 2
max_size      = 10
min_size      = 2
capacity_type = "" # Optional Use this only for SPOT capacity as  capacity_type = "spot"

k8s_labels = {
Environment = "dev-test"
Zone        = ""
WorkerType  = "SELF_MANAGED_ON_DEMAND"
}

additional_tags = {
ExtraTag    = "t2x-on-demand"
Name        = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}

module "eks-ssp-kubernetes-addons" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

eks_cluster_id                        = module.eks-ssp.eks_cluster_id

# EKS Addons
enable_amazon_eks_vpc_cni             = true
enable_amazon_eks_coredns             = true
enable_amazon_eks_kube_proxy          = true
enable_amazon_eks_aws_ebs_csi_driver  = true

#K8s Add-ons
enable_aws_load_balancer_controller   = true
enable_metrics_server                 = true
enable_cluster_autoscaler             = true
enable_aws_for_fluentbit              = true
enable_argocd                         = true
enable_ingress_nginx                  = true

depends_on = [module.eks-ssp.self_managed_node_groups]
}

Providers:

terraform {

  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.6.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}

Upvotes: 5

Views: 23646

Answers (5)

Rotem jackoby

Reputation: 22128

This issue can happen for multiple reasons, so I'm adding another solution.

It is based on issue #911 in the terraform-aws-eks module: Error: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp 127.0.0.1:80: connect: connection refused.

Running the following command can help solve the issue (terragrunt state rm if you use Terragrunt, or the equivalent terraform state rm otherwise):

terragrunt state rm module.eks.module.eks.kubernetes_config_map.aws_auth
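With the module layout from the question, the state address will differ; here is a minimal sketch with plain Terraform, where the resource address is taken from the error message in the question, so adjust it to whatever terraform state list shows in your own state:

terraform state list | grep aws_auth
terraform state rm 'module.eks-ssp.kubernetes_config_map.aws_auth[0]'

After the removal, the next terraform apply will simply plan to create the aws-auth ConfigMap again.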

Upvotes: 0

Artem Kosenko

Reputation: 1

Check the examples folder of the EKS module on GitHub. You should not use "data" sources in the kubernetes provider configuration; they do not work when you create the resources from scratch for the very first time. The provider configuration should look like this instead:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
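The same exec-based authentication can be mirrored in the helm provider if you also install add-ons through it; this is only a sketch, assuming the same terraform-aws-eks module outputs used above:

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    exec {
      # Fetch a fresh token via the AWS CLI on every run instead of reading cluster data sources
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}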

Upvotes: 0

Wajahat Lateef

Reputation: 77

In my case I was trying to deploy to a Kubernetes cluster (GKE) using Terraform. I replaced the kubeconfig path with the kubeconfig file's absolute path.

From

provider "kubernetes" {
  config_path    = "~/.kube/config"
  #config_context = "my-context"
}

To

provider "kubernetes" {
  config_path    = "/Users/<username>/.kube/config"
  #config_context = "my-context"
}
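If you would rather keep the tilde in the path, a portable alternative (not from the original answer, just a sketch) is to let Terraform expand it with pathexpand():

provider "kubernetes" {
  # pathexpand() resolves "~" to the current user's home directory
  config_path = pathexpand("~/.kube/config")
}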

Upvotes: 0

Dan Ehrlich

Reputation: 101

The answer from Marko E above seems to fix this; I just ran into this issue. After applying that code, all together in a separate providers.tf file, Terraform now makes it past the error. I will post later as to whether the deployment makes it fully through.

For reference, I was able to go from 65 resources created down to 42 resources created before I hit this error. This was using the exact best-practice sample configuration recommended at the top of the README from AWS Consulting here: https://github.com/aws-samples/aws-eks-accelerator-for-terraform

Upvotes: 1

Marko E

Reputation: 18138

Based on the example provided in the GitHub repo [1], my guess is that the provider configuration blocks are missing for this to work as expected. Looking at the code provided in the question, it seems that the following needs to be added:

data "aws_region" "current" {}

data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}

provider "aws" {
  region = data.aws_region.current.id
  alias  = "default" # this should match the named profile you used if at all
}

provider "kubernetes" {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

If helm is also required, I think the following block [2] needs to be added as well:

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
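One note on why this helps: when the kubernetes provider has no host configured, it typically falls back to localhost, which is exactly what produces the "dial tcp 127.0.0.1:80: connect: connection refused" error in the question. Wiring the provider to the data sources above points it at the real cluster endpoint instead.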

The provider argument references for kubernetes and helm are in [3] and [4], respectively.


[1] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47

[2] https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55

[3] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference

[4] https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference

Upvotes: 8
