borchero

Reputation: 6012

How to Add Internal DNS Records in Kubernetes

I'm currently setting up a Kubernetes cluster that runs both private and public services. While the public services should be accessible via the internet (and FQDNs), the private services should not be (the idea is to run a VPN inside the cluster so that private services are reachable via simple FQDNs).

At the moment, I'm using nginx-ingress and configuring Ingress resources where I set the hostname for public services. external-dns then adds the corresponding DNS records (in Google CloudDNS) - this already works.
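For reference, the public setup looks roughly like this (the names and the host are placeholders, and I'm showing the current networking.k8s.io/v1 API); external-dns picks the host up from the Ingress rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-service
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: public-service
            port:
              number: 80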

The problem I'm facing now: I'm unsure how I can add DNS records in the same way (i.e. by simply specifying a host in Ingress definitions and using some ingress class "private"), yet have these DNS records be resolvable only from within the cluster.

I was under the impression that I could add these records to the Corefile that CoreDNS uses. However, I can't figure out how this can be automated.
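For illustration, a record of the kind I have in mind would be a rewrite rule in the Corefile that points a private host at the internal ingress controller's cluster service (the hostnames below are placeholders):

rewrite name exact app.internal.example.com nginx-private.ingress-nginx.svc.cluster.local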

Thank you for any help!

Upvotes: 1

Views: 5502

Answers (5)

Junaid

Reputation: 3955

If you have an internal DNS server that can resolve the FQDNs, then you can configure the Corefile to forward internal service domain resolution to that DNS server.

For example, if the internal domains are *.mycompany.local, the Corefile could have a section for that:

mycompany.local {
        log
        errors
        ready
        cache 10
        forward . <internal DNS server IP>
}

All requests to app.mycompany.local or frontend.middleware.backend.mycompany.local will be forwarded to your internal DNS server for resolution.
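On a typical cluster, the Corefile lives in the coredns ConfigMap in the kube-system namespace (assuming the default CoreDNS deployment), so the block above can be added with:

kubectl -n kube-system edit configmap coredns

If the reload plugin is enabled in the Corefile, CoreDNS picks up the change automatically; otherwise restart the CoreDNS pods.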

Documentation for the forward plugin is available here: https://coredns.io/plugins/forward/

Upvotes: 1

Piotr

Reputation: 406

Kubernetes has built-in DNS, and each service receives an internal FQDN. These services are not reachable from outside the cluster unless:

  • the service type is 'LoadBalancer'
  • you define an Ingress for that service (assuming you already have an ingress controller like nginx deployed)

So your sample service deployed in the 'default' namespace is accessible inside the cluster out of the box via service1.default.svc.cluster.local

You can map a custom name to an external DNS name by creating a Service of type ExternalName:

apiVersion: v1
kind: Service
metadata:
  name: service1
  namespace: prod
spec:
  type: ExternalName
  externalName: service1.database.example.com

Note that no proxying is involved; for this to work, you need to make sure the target name is resolvable and routable from within your cluster (outbound connections are allowed, etc.)
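To check that such a name resolves from inside the cluster, you can run a throwaway pod (the pod name is arbitrary; busybox 1.28 is used because its nslookup is known to work well):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup service1.prod.svc.cluster.local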

Upvotes: 1

InsOp

Reputation: 2699

As your Kubernetes cluster is hosted on Google Cloud, you can use Cloud DNS: there you can add a private zone with your DNS name.
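For example, a private zone can be created like this (zone name, domain, and network are placeholders):

gcloud dns managed-zones create internal-zone \
    --dns-name="internal.example.com." \
    --visibility=private \
    --networks="default" \
    --description="Private zone for in-cluster services"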

Then you can push this DNS server to your clients in your VPN configuration (OpenVPN syntax) with:

 push "dhcp-option DOMAIN gitlab.internal.example.com"
 push "dhcp-option DNS 169.254.169.254"

169.254.169.254 is Google's internal DNS resolver, accessible only from inside a Google private network.

Upvotes: 0

borchero

Reputation: 6012

I managed to resolve the problem myself: I wrote a little Go application which watches Ingress resources and adds corresponding rewrite rules to the Corefile read by CoreDNS. Works like a charm :)
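For anyone curious about the general approach, here is a rough sketch (not the actual tool; the "private" ingress class, the target service name, and the omitted ConfigMap patching are all assumptions). It watches Ingress resources via client-go and prints the CoreDNS rewrite rules that would map each private host to the ingress controller's cluster-internal service; the real application additionally writes these lines into the coredns ConfigMap and lets CoreDNS reload:

package main

import (
	"context"
	"fmt"
	"log"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Hypothetical ClusterIP service of the private ingress controller.
const ingressService = "nginx-private.ingress-nginx.svc.cluster.local"

func main() {
	// Assumes the tool runs inside the cluster with a service account
	// that is allowed to watch Ingress resources.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Watch Ingress resources in all namespaces.
	w, err := client.NetworkingV1().Ingresses(metav1.NamespaceAll).
		Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for event := range w.ResultChan() {
		ing, ok := event.Object.(*networkingv1.Ingress)
		if !ok {
			continue
		}
		// Only handle Ingresses of the hypothetical "private" class.
		if ing.Spec.IngressClassName == nil || *ing.Spec.IngressClassName != "private" {
			continue
		}
		if event.Type != watch.Added && event.Type != watch.Modified {
			continue
		}
		for _, rule := range ing.Spec.Rules {
			// One rewrite rule per host: CoreDNS rewrites the query to the
			// ingress service name, which its kubernetes plugin resolves
			// to the controller's ClusterIP.
			fmt.Printf("rewrite name exact %s %s\n", rule.Host, ingressService)
		}
	}
}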

PS: If anyone wants to use the tool, let me know. I'm happy to make it open-source if there is any demand.

Upvotes: 1

Markus Dresch

Reputation: 5574

If you don't want them to be accessed publicly, you shouldn't add Ingress rules for them; Ingress exists to route external traffic into your cluster.

All your services are already registered in CoreDNS and accessible via their local names; there is no need to add anything else.

Upvotes: 2
