Reputation: 3357
I have a Kubernetes setup that looks like this:
ingress -> headless service (a Kubernetes Service with clusterIP: None) -> StatefulSet (2 pods)
The service FQDN resolves like this:
nslookup my-service
Server: 100.4.0.10
Address: 100.4.0.10#53
Name: my-service.my-namespace.svc.cluster.local
Address: 100.2.2.8
Name: my-service.my-namespace.svc.cluster.local
Address: 100.1.4.2
I am trying to reach one of the pods directly via the service using the following FQDN, but I am not able to do so.
curl -I my-pod-0.my-service.my-namespace.svc.cluster.local:8222
curl: (6) Could not resolve host: my-pod-0.my-service.my-namespace.svc.cluster.local
If I hit the service directly, it works correctly (load-balancing across the pods):
curl -I my-service.my-namespace.svc.cluster.local:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:24:42 GMT
Content-Length: 656
If I hit a pod directly using its pod IP, it also works fine:
curl -I 100.2.2.8:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:29:22 GMT
Content-Length: 656
Content-Type: text/html; charset=utf-8
But my use case requires me to reach a specific StatefulSet pod by FQDN, i.e. my-pod-0.my-service.my-namespace.svc.cluster.local. What am I missing here?
Upvotes: 5
Views: 5471
Reputation: 10868
The original answer didn't clarify how the OP fixed the issue: the problem was in the serviceName property of the StatefulSet spec.
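For reference, a minimal sketch of what that looks like, reusing the names from the question (the app: my-app label and the image are placeholders, not from the OP's manifests). Per-pod DNS names such as my-pod-0.my-service.my-namespace.svc.cluster.local are only created when spec.serviceName matches the headless Service's name exactly:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-pod
spec:
  serviceName: my-service      # must equal the headless Service's metadata.name
  replicas: 2
  selector:
    matchLabels:
      app: my-app              # placeholder label; must match the Service selector
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: my-image      # placeholder image
          ports:
            - containerPort: 8222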
Upvotes: 3
Reputation: 18381
Example StatefulSet called foo with image nginx:
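A minimal manifest for such a StatefulSet might look like the following sketch (the app: foo labels are an assumption chosen to match the Service selector used later, and serviceName points at the headless Service created in the next step):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: nginx           # must equal the headless Service's metadata.name
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
After applying it, the StatefulSet reports ready: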
k get statefulsets.apps
NAME READY AGE
foo 3/3 8m55s
This StatefulSet created the following pods (foo-0, foo-1, foo-2):
k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 1 3h47m 10.1.198.71 ps-master <none> <none>
foo-0 1/1 Running 0 12m 10.1.198.121 ps-master <none> <none>
foo-1 1/1 Running 0 12m 10.1.198.77 ps-master <none> <none>
foo-2 1/1 Running 0 12m 10.1.198.111 ps-master <none> <none>
Now create a headless service (clusterIP: None) as follows. Make sure the selector matches the labels on your StatefulSet's pods:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: foo
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 80
      name: web
  selector:
    app: foo
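As a quick usage note (the file name here is just an example), apply the manifest and confirm the Service really is headless, i.e. its CLUSTER-IP column shows None:
k apply -f headless-svc.yaml   # file name is arbitrary
k get svc nginx                # CLUSTER-IP should show None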
Now, run nslookup to verify that DNS resolution is working for the service (optional step):
k exec -it busybox -- nslookup nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
Address 2: 10.1.198.111 foo-2.nginx.default.svc.cluster.local
Address 3: 10.1.198.121 foo-0.nginx.default.svc.cluster.local
Now validate that per-pod resolution is working:
k exec -it busybox -- nslookup foo-1.nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
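To mirror the original use case, you can also reach a single pod over HTTP by its stable DNS name, bypassing the service's load-balancing (this assumes nginx is listening on port 80 and uses the wget applet bundled with the busybox image):
k exec -it busybox -- wget -qO- http://foo-0.nginx.default.svc.cluster.local:80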
More info: Here
Note: In this case the OP had an incorrect mapping between the headless Service and the StatefulSet, which can be verified with the following command:
k get statefulsets.apps foo -o jsonpath="{.spec.serviceName}{'\n'}"
nignx
Ensure that this value exactly matches the headless Service's name (nginx in this example); a mismatched or misspelled value such as nignx is exactly what breaks per-pod DNS resolution.
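One way to cross-check this (a sketch; the jsonpath filter simply lists Services whose clusterIP is None) is to list the headless Services in the namespace and make sure the serviceName printed above appears among them:
k get svc -o jsonpath="{range .items[?(@.spec.clusterIP=='None')]}{.metadata.name}{'\n'}{end}"
If it does not, fix the mismatch; note that serviceName cannot be edited in place on an existing StatefulSet, so the StatefulSet typically has to be recreated with the correct value.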
Upvotes: 1