Reputation: 9644
I am trying to access, from inside a pod, a service that listens on a port on every node of my bare-metal (Ubuntu 20.04) cluster. Using the real IP address of one of the nodes works, but I need each pod to connect to the port on its own node, and I can't use '127.0.0.1' from inside a pod.
More info: I am trying to wrangle a bunch of existing services into k8s. We use an old version of Consul for service discovery, running on every node and serving DNS on port 8600. I figured out how to edit the coredns Corefile to add a consul { } block so that lookups for .consul names work:
consul {
    errors
    cache 30
    forward . 157.90.123.123:8600
}
However, I need to replace that hard-coded IP address with the address of the node the coredns pod is running on.
Any ideas? Or other ways to solve this problem? Tx.
Upvotes: 1
Views: 1101
Reputation: 9644
Comment from @mdaniel worked. Tx.
Edit the coredns Deployment and add this to the container spec, after volumeMounts:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
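For context, status.hostIP comes from the Kubernetes Downward API, which injects the IP of the node hosting the pod. The patched container spec looks roughly like this (image and surrounding fields are illustrative, not copied from a real Deployment):

```yaml
containers:
  - name: coredns
    image: coredns/coredns        # illustrative; keep whatever image your cluster uses
    args: ["-conf", "/etc/coredns/Corefile"]
    env:
      - name: K8S_NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP   # Downward API: IP of the node this pod landed on
```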
Edit the coredns ConfigMap and add this to the bottom of the Corefile:
consul {
    errors
    cache 30
    forward . {$K8S_NODE_IP}:8600
}
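This works because CoreDNS expands {$VAR} references in the Corefile from the container's environment at load time, so each coredns pod ends up forwarding to its own node. A toy sketch of that substitution step (my own illustration, not CoreDNS's actual implementation):

```python
import os
import re

def substitute_env(corefile_text: str) -> str:
    """Replace CoreDNS-style {$VAR} tokens with values from the environment.

    Unknown variables are left untouched in this sketch.
    """
    return re.sub(
        r"\{\$(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        corefile_text,
    )

# Simulate the Downward API injecting the node IP into the pod's environment.
os.environ["K8S_NODE_IP"] = "10.0.0.5"
print(substitute_env("forward . {$K8S_NODE_IP}:8600"))
# → forward . 10.0.0.5:8600
```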
Check that DNS is working:
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
nslookup myservice.service.consul
nslookup www.google.com
exit
Upvotes: 1