user2739823

Reputation: 407

Unable to connect to Cockroach pod in Kubernetes

I am developing a simple web app with a web service and a persistence layer. The persistence layer is CockroachDB. I am trying to deploy my app with a single command:

kubectl apply -f my-app.yaml

The app deploys successfully. However, when the backend has to store something in the db, the following error appears:

dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host

When I start my app I provide the following connection string for CockroachDB, and the initial connection succeeds, but when I try to store something in the db the above error appears:

postgresql://root@web-service-db:26257/defaultdb?sslmode=disable

For some reason the web pod cannot talk to the db pod. My whole configuration is:


# Service for web application
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-service
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: http
      nodePort: 30103
  externalIPs:
    - 192.168.1.9    # <- my local ip
---

# Deployment of web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  selector:
    matchLabels:
      app: web-service
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
    spec:
      hostNetwork: true
      containers:
        - name: web-service
          image: my-local-img:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              hostPort: 8080
          env:
            - name: DB_CONNECT_STRING
              value: "postgresql://root@web-service-db:26257/defaultdb?sslmode=disable"

---
### Kubernetes official doc PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroach-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/my-local-volueme"

---
### Kubernetes official doc PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroach-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---

# Cockroach used by web-service
apiVersion: v1
kind: Service
metadata:
  name: web-service-cockroach
  labels:
    app: web-service-cockroach
spec:
  selector:
    app: web-service-cockroach
  type: NodePort
  ports:
    - protocol: TCP
      port: 26257
      targetPort: 26257
      nodePort: 30104
---

# Cockroach stateful set used to deploy locally
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-service-cockroach
spec:
  serviceName: web-service-cockroach
  replicas: 1
  selector:
    matchLabels:
      app: web-service-cockroach
  template:
    metadata:
      labels:
        app: web-service-cockroach
    spec:
      volumes:
        - name: cockroach-pv-storage
          persistentVolumeClaim:
            claimName: cockroach-pv-claim
      containers:
        - name: web-service-cockroach
          image: cockroachdb/cockroach:latest
          command:
            - /cockroach/cockroach.sh
            - start
            - --insecure
          volumeMounts:
            - mountPath: "/tmp/my-local-volume"
              name: cockroach-pv-storage
          ports:
            - containerPort: 26257

After deployment, everything looks good:

kubectl get service
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
kubernetes               ClusterIP   10.96.0.1      <none>        443/TCP           50m
web-service              NodePort    10.111.85.64   192.168.1.9   8080:30103/TCP    6m17s
web-service-cockroach    NodePort    10.96.42.121   <none>        26257:30104/TCP   6m8s
kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
web-service-6cc74b5f54-jlvd6   1/1     Running   0          24m
web-service-cockroach-0        1/1     Running   0          24m

Thanks in advance!

Upvotes: 1

Views: 444

Answers (1)

Matt

Reputation: 8152

Looks like you have a problem with DNS.

dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host

Address 192.168.65.1 does not look like a kube-dns service IP.

This would be explained if you were using the host network, and surprisingly you are. With hostNetwork: true, a pod defaults to the DNS server that the host uses, which is never kube-dns, so cluster service names like web-service-cockroach cannot be resolved.
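You can confirm this by checking which resolver the web pod actually uses (a diagnostic sketch; the pod name below comes from your `kubectl get pods` output, so substitute your own):

```shell
# Print the resolver config inside the web pod
kubectl exec web-service-6cc74b5f54-jlvd6 -- cat /etc/resolv.conf
```

With hostNetwork: true the `nameserver` line shows the host's resolver (in your case 192.168.65.1) rather than the kube-dns ClusterIP.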


To solve it, set:

spec:
  dnsPolicy: ClusterFirstWithHostNet

This keeps the pod on the host network but tells it to use the cluster's DNS server, so service names resolve again.
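In the Deployment from your question, the field belongs in the pod template spec, next to hostNetwork (a sketch of just the relevant fragment):

```yaml
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```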

Have a look at the Kubernetes documentation for more information about Pod's DNS Policy.

Upvotes: 1
