Phocs

Reputation: 2510

K8s DNS resolves the Route53 domain internally if the ingress is defined

The following chain describes how Pods that define an API are reached from the outside.

Client -> Route53 (.example.com) 
  -> LoadBalancer -> Nginx 
    -> Service -> Pod
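
For reference, each API is exposed through an ingress roughly like the following sketch (api1, the ns namespace, the nginx class and port 80 are just placeholders):

# rough sketch of one API's ingress; names and port are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api1
  namespace: ns
spec:
  ingressClassName: nginx
  rules:
    - host: api1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api1
                port:
                  number: 80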

Some pods, in addition to exposing an API, also consume the APIs of other pods in the same k8s cluster. For this pod-to-pod communication I can either use the internal DNS name, e.g. api1.ns.svc.cluster.local, or the Route53 domain, e.g. api1.example.com.
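
To make the two options concrete, from inside a pod a call to api1 looks roughly like one of these (the port and the /health path are just placeholders):

# option 1: internal service DNS, traffic never leaves the cluster
curl http://api1.ns.svc.cluster.local:8080/health

# option 2: public Route53 name, traffic goes out to the load balancer and back in
curl https://api1.example.com/health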

The first option is more efficient, but it means I have to maintain, for each pod, a list of the services and namespaces it needs to reach.

The second option is easier to manage: I know that every API responds on *.example.com, so I only need to know which subdomain to call. This approach, however, is extremely inefficient, because the traffic leaves the cluster only to come back in:

Pod1 -> Route53 (api2.example.com) 
  -> LoadBalancer -> Nginx 
    -> Service -> Pod2

In this scenario I would like to know whether there are known solutions that let a pod reach another pod through the same domain managed by Route53, but without leaving the cluster, so that the traffic stays internal.

I know I can use a CoreDNS rewrite, but then I would still have to keep an updated list of rewrites; moreover, Route53 also holds subdomains pointing to services outside the cluster, e.g. db.example.com.

So the idea is to autodiscover the ingresses and keep the traffic internal whenever possible:

Pod1 -> k8sdns with api2.example.com ingress 
  -> Nginx -> Service 
    -> Pod2 

Or

Pod1 -> k8sdns without db.example.com ingress
  -> Route53 -> LoadBalancer 
    -> DB

Thanks

Upvotes: 5

Views: 1800

Answers (1)

mauricubo

Reputation: 331

Yes, you can do it using the CoreDNS rewrite plugin. The plugin's official documentation covers the details; here is an example of how to implement it.

  1. Edit the CoreDNS ConfigMap
kubectl edit cm -n kube-system coredns
  2. Add this line inside the config:
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local

Your ConfigMap is going to look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  3. Save the edit and delete your CoreDNS pods:
kubectl delete pod -n kube-system --selector k8s-app=kube-dns
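
Note that the Corefile above also enables the reload plugin, so CoreDNS should eventually pick up the edited ConfigMap on its own; deleting the pods just forces an immediate reload. If you prefer a rolling restart instead of deleting the pods (assuming the default coredns Deployment name):

kubectl -n kube-system rollout restart deployment coredns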
  4. Test it by querying CoreDNS directly (or from a dummy pod, as sketched after the dig output below):
# dig app1.yourdomain.com

; <<>> DiG 9.16.33-Debian <<>> app1.yourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51020
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: c06f814fbf04a827 (echoed)
;; QUESTION SECTION:
;app1.yourdomain.com.       IN  A

;; ANSWER SECTION:
app1.default.svc.cluster.local. 30 IN   A   10.110.113.195

;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Oct 26 04:49:47 UTC 2022
;; MSG SIZE  rcvd: 107
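
If you would rather run the query from a throwaway pod, something along these lines should work (busybox:1.28 is just one image whose nslookup behaves well; app1.yourdomain.com is a placeholder):

# spin up a temporary pod, resolve the name, then remove the pod
kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup app1.yourdomain.com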

Upvotes: 3
