Urr4

Reputation: 793

Kubernetes Service unreachable

I have created a Kubernetes cluster on two Raspberry Pis (a Model 3 and a 3B+) to use as a Kubernetes playground.

I have deployed a PostgreSQL instance and a Spring Boot app (called meal-planner) to play around with. The meal-planner should read data from and write data to the PostgreSQL database.

However, the app can't reach the database.

Here is the deployment descriptor of the PostgreSQL instance:

kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
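The postgres-pv-claim referenced above is created separately and isn't shown here. For reference, a minimal claim looks roughly like this (the storage class defaulting and the size are placeholders, not taken from my actual manifests):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi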

Here is the deployment descriptor of the meal-planner:

kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
    - port: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---

The meal-planner image is an arm32v7 image running a jar file. Inside the cluster, the meal-planner uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
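In the app itself this is wired up via the standard Spring Boot datasource properties, roughly like the following (the environment-variable indirection is just illustrative):

spring:
  datasource:
    url: jdbc:postgresql://postgres:5432/home
    username: ${DB_USER}
    password: ${DB_PASSWORD}
    driver-class-name: org.postgresql.Driver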

I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
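For reference, the port-forward looks something like this:

kubectl port-forward service/postgres 5432:5432 -n home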

When both applications are deployed, I can kubectl exec -it <<podname>> -n home -- bin/sh into the meal-planner pod. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
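Note that wget without an explicit port tries port 80, which the postgres service doesn't expose anyway; to probe the actual Postgres port, something like netcat could be used (assuming the binary exists in the image):

nc -zv postgres 5432

Since the error is "Network is unreachable" rather than "Connection refused", though, it looks like a routing-level problem rather than a wrong port.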

I don't know why the network is unreachable, and I don't know what I can do about it.

Upvotes: 2

Views: 1413

Answers (1)

Raoslaw Szamszur

Reputation: 1740

First of all, don't use Deployment workloads for applications that need to persist state. This can get you into trouble, up to and including data loss. For that purpose, you should use a StatefulSet:

StatefulSet is the workload API object used to manage stateful applications.

Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce access mode.
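A rough sketch of such a hostPath PersistentVolume (the name, path, size, and storage class name are placeholders):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data/postgres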

Now, regarding your issue: my guess is that it's either a problem with how your application connects to the DB, or the remote connection is refused by the rules in pg_hba.conf.
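For the latter, check whether pg_hba.conf contains a rule allowing password logins from other hosts. The official postgres image normally appends one automatically; it looks something like this (fields are: type, database, user, address, method):

host  all  all  all  md5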

Here is a minimal working example that'll help you get started:

kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              value: johndoe
            - name: POSTGRES_PASSWORD
              value: thisisntthepasswordyourelokingfor
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432

Now let's test this. NOTE: I'll also create a deployment from the Postgres image, just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created DB.
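The test_container.yml isn't anything special; a minimal version consistent with the output below would be the following (the sleep command just keeps the pod alive, since we only need the binaries from the image):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: test-container
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
        - name: test-container
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          command: ["sleep", "infinity"]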

pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml 
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml 
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
postgres-so-test-0               1/1     Running   0          19s
test-container-d77d75d78-cgjhc   1/1     Running   0          12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME                                 READY   STATUS    RESTARTS   AGE
pod/postgres-so-test-0               1/1     Running   0          26s
pod/test-container-d77d75d78-cgjhc   1/1     Running   0          19s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/postgres-so-test   ClusterIP   10.43.242.51   <none>        5432/TCP   30s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-container   1/1     1            1           19s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/test-container-d77d75d78   1         1         1       19s

NAME                                READY   AGE
statefulset.apps/postgres-so-test   1/1     27s

pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections

If you still have trouble connecting to the DB, please attach the following:

  1. kubectl describe pod <<postgres_pod_name>>
  2. kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
  3. kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf

Also, research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with a master, replicas, and a load balancer).

Upvotes: 2
