anonymous user

Reputation: 317

How to add a DNS record based on the worker node IP in Kubernetes

I have a Kubernetes cluster with two worker nodes. I have configured CoreDNS to forward any DNS request that matches the ".com" domain to a remote server.

.com:53 {
    forward . <remote machine IP>
}
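
For reference, I verify that the forwarding is picked up by running a lookup from a throwaway pod (the .com name here is just an example):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup pod-0.test.com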

Let's say pod-0 sits on worker-0 and pod-1 sits on worker-1.

When I uninstall and reinstall the pods, there is a chance they get assigned to different worker nodes.

Is there a way for CoreDNS to resolve a pod's hostname to the IP of the worker node it is running on?

It would be really helpful if someone has an approach to handle this issue. Thanks in advance!

Upvotes: 1

Views: 513

Answers (2)

Amila Senadheera

Reputation: 13215

Have you tried using node affinity? You can always schedule a given Pod to the same node by using node labels. Simply use the kubernetes.io/hostname label key to select the node, as below:

First Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-0
spec:
  # Pin this Pod to the node labelled kubernetes.io/hostname=worker1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker1 
  hostname: pod-0   # spec.hostname must be a plain DNS label; a dotted value like pod-0.test.com is rejected by the API server
  containers:
    ...
    ...

Second Pod

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  # Pin this Pod to the node labelled kubernetes.io/hostname=worker2
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker2
  hostname: pod-1   # spec.hostname must be a plain DNS label; a dotted value like pod-1.test.com is rejected by the API server
  containers:
    ...
    ...
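
To confirm that the Pods actually landed on the intended nodes (worker1 and worker2 in the manifests above), you can check the NODE column and the node labels with something like:

kubectl get pods pod-0 pod-1 -o wide

kubectl get nodes -L kubernetes.io/hostname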

Upvotes: 0

Kranthiveer Dontineni

Reputation: 1533

There is a workaround for this issue: you can use node selectors to always deploy your pods on the same nodes. If you don't want to do it that way and you are deploying via a pipeline, you can add a few steps to the pipeline to make the DNS entries. The flow goes as below.

Trigger CI/CD pipeline → Pod gets deployed → run a kubectl command to find which node each pod is on → SSH into the remote machine (with sudo privileges if required) and update the required config files.

Use the command below to get the details of the pods running on a particular node:

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
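
As a rough sketch of that pipeline step, assuming the remote machine serves the .com records from a hosts-style file (for example dnsmasq reading /etc/hosts) and using placeholder names (pod-0, pod-0.test.com, the SSH user) that you would replace:

# Find which node pod-0 is running on and that node's internal IP
NODE=$(kubectl get pod pod-0 -o jsonpath='{.spec.nodeName}')
NODE_IP=$(kubectl get node "$NODE" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')

# Append the record on the remote DNS machine (sudo if required)
ssh <user>@<remote machine IP> "echo '$NODE_IP pod-0.test.com' | sudo tee -a /etc/hosts"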

Upvotes: 0
