wkhatch

Reputation: 2741

AWS EKS External DNS - route 53 record keeps cycling

I'm deploying an ingress into my EKS cluster. Everything deploys without issue, but the DNS record in Route 53 seems to be continuously deleted and recreated, so requests alternate between completing successfully and failing with a DNS error. I'm looking for a way to debug this: the CloudWatch logs, while copious to an almost overwhelming degree, haven't been helpful, or I've yet to find the one log group out of the many related to my cluster that actually indicates something useful. I'm using Terraform, and below is the code for the ingress:

# Kubernetes Ingress Manifest (AWS Load Balancer Controller / ALB)
resource "kubernetes_ingress_v1" "ca_alb_service" {
  metadata {
    name = "ca-alb"
    annotations = {
      # Traffic Routing

      "alb.ingress.kubernetes.io/load-balancer-name" = "ca-alb-${var.environment}"
      # Ingress Core Settings
      "alb.ingress.kubernetes.io/scheme" = "internet-facing"
      # Health Check Settings
      "alb.ingress.kubernetes.io/healthcheck-protocol" = "HTTP"
      "alb.ingress.kubernetes.io/healthcheck-port"     = "traffic-port"
      # Note: add health-check path annotations at the service level if multiple targets share one load balancer
      "alb.ingress.kubernetes.io/healthcheck-interval-seconds" = "15"
      "alb.ingress.kubernetes.io/healthcheck-timeout-seconds"  = "5"
      "alb.ingress.kubernetes.io/success-codes"                = "200"
      "alb.ingress.kubernetes.io/healthy-threshold-count"      = "2"
      "alb.ingress.kubernetes.io/unhealthy-threshold-count"    = "2"
      "alb.ingress.kubernetes.io/healthcheck-path"             = "/health"
      "alb.ingress.kubernetes.io/listen-ports"                 = jsonencode([{ "HTTPS" = 443 }, { "HTTP" = 80 }])

      "alb.ingress.kubernetes.io/certificate-arn" = data.terraform_remote_state.hub.outputs.domain_certificate_arn
      # SSL Redirect Setting
      "alb.ingress.kubernetes.io/ssl-redirect" = "443"

      # AWS Resource Tags
      "service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags" = "Environment=${var.environment},Team=dev,Name=caalb-${var.environment}"
      "external-dns.alpha.kubernetes.io/hostname" = "${lookup(var.subdomain_for_environment, var.environment)}.mydomain.io"
    }
  }
  spec {
    ingress_class_name = "ingress-controller-class" # Ingress Class, this is the default for all clusters, so we could exclude this argument
    default_backend {
      service {
        name = kubernetes_service_v1.ca-as-np.metadata[0].name
        port {
          number = 3000
        }
      }
    }
  }
}
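To show the symptom more concretely, here is how I've been confirming the record actually flaps (the subdomain below is a placeholder for the real hostname):

```shell
# Poll the hostname every 5 seconds; the output alternates between the
# ALB's IPs and nothing while the record is being deleted and recreated.
watch -n 5 'dig +short <subdomain>.mydomain.io'
```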

Upvotes: 1

Views: 346

Answers (1)

mchavezi

Reputation: 580

You can enable debug mode for external-dns by running kubectl edit deployment external-dns -n default and adding --log-level=debug to the container args. You can then follow the verbose logs with kubectl logs deployment/external-dns -n default -f.
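If you'd rather not open an interactive editor, a non-interactive sketch of the same change (assuming the deployment is named external-dns in the default namespace, and that its first container takes an args list):

```shell
# Append --log-level=debug to the external-dns container's args.
kubectl -n default patch deployment external-dns --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--log-level=debug"}]'

# Follow the verbose logs, filtering for the hostname that keeps cycling.
kubectl -n default logs deployment/external-dns -f | grep -i "mydomain.io"
```

In the debug output, look for repeated CREATE/DELETE change batches for the same record on each sync interval; that tells you external-dns itself is driving the churn rather than something else in the account.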

As for the likely cause: external-dns writes extra TXT records into Route 53 alongside each record it manages. These are ownership markers, and if something disputes that ownership (for example a second external-dns instance, or a stale owner id left over from an earlier deployment), external-dns will delete and recreate the records on every sync, which matches the cycling you're seeing. If that is the case, try deleting the stale TXT records so the current instance can claim ownership cleanly.
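One way to inspect those TXT records (a sketch using the AWS CLI; ZONE_ID is a placeholder for your hosted zone ID):

```shell
# List only the TXT records in the zone; external-dns ownership records
# carry a value like:
#   "heritage=external-dns,external-dns/owner=<owner-id>,..."
aws route53 list-resource-record-sets --hosted-zone-id ZONE_ID \
  --query "ResourceRecordSets[?Type=='TXT']"
```

If you see TXT records for the same hostname with two different owner ids, two controllers are fighting over the record, and removing the stale one (or aligning --txt-owner-id across instances) should stop the cycling.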

Upvotes: 0
