sontags

Reputation: 3241

Kubernetes service not working as expected

I am failing to deploy postgres (single node, official image) on Kubernetes and to allow other services to access it via a ClusterIP service.

The config is rather simple - Namespace, Deployment, Service:

---
apiVersion: v1
kind: Namespace
metadata:
  name: database
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: database
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11.1
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: pg
  namespace: database
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    name: postgres
    port: 5432
    targetPort: 5432
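
As a sanity check, the Service's endpoints can be listed to confirm that the selector actually matches the pod (an empty ENDPOINTS column would mean the app: postgres label does not match anything):

# The ENDPOINTS column should show the pod IP on port 5432.
kubectl --kubeconfig $k8sconf -n database get endpoints pg
kubectl --kubeconfig $k8sconf -n database get svc pg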

To test, I exec'd into the pod and ran a simple psql command to check the connection. All works well locally:

kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -U admin postgresdb -c "\t"
Tuples only is on.

But as soon as I try to access postgres via service, the command fails:

kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- psql -h pg -U admin postgresdb -c "\t"
psql: could not connect to server: Connection timed out
    Is the server running on host "pg" (10.245.102.15) and accepting
    TCP/IP connections on port 5432?
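
To separate service routing from Postgres itself, the pod IP can be targeted directly, bypassing both DNS and the ClusterIP (<pod-name> and <pod-ip> below are placeholders taken from the get pods output):

# If this succeeds while "psql -h pg" times out, the fault lies in
# kube-proxy/CNI routing rather than in Postgres.
kubectl --kubeconfig $k8sconf -n database get pods -o wide
kubectl --kubeconfig $k8sconf -n database exec -it <pod-name> -- psql -h <pod-ip> -U admin postgresdb -c "\t"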

This is tested on a DigitalOcean single-node cluster (Kubernetes 1.12.3).

Postgres listens on * on the correct port, and pg_hba.conf looks by default like this:

...
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
host    all             all             all                     md5
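
Given that last line (host all all all md5), Postgres accepts password-authenticated connections from any address, so pg_hba.conf is unlikely to be the blocker; a pg_hba rejection would also produce a "no pg_hba.conf entry" error rather than a timeout. For completeness, the listen address can be verified from inside the pod with a standard Postgres query:

# Should print "*", i.e. Postgres accepts TCP connections on all interfaces.
psql -U admin postgresdb -c "SHOW listen_addresses;"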

To reproduce, see this gist.

Execute via (please use a fresh cluster and read through):

export k8sconf=/path/to/your/k8s/config/file
kubectl --kubeconfig $k8sconf apply -f https://gist.githubusercontent.com/sontags/c364751e7f0d8ba1a02a9805efc68db6/raw/01b1808348541d743d6a861402cfba224bee8971/database.yaml
kubectl --kubeconfig $k8sconf -n database exec -it $(kubectl --kubeconfig $k8sconf -n database get pods -o jsonpath='{.items[*].metadata.name}') -- /bin/bash /reproducer/runtest.sh

Any hints as to why the service does not allow connections, or other tests I could perform?

Upvotes: 1

Views: 1015

Answers (1)

Rico

Reputation: 61551

Hard to tell without access to your cluster. This works fine on my AWS cluster. Some things to look at:

  • Is kube-proxy running on all nodes? (see the commands after this list)
  • Is your network overlay/CNI running on all nodes?
  • Does this happen only with the pg pod? What about other pods?
  • DNS seems to be fine, since pg is being resolved to 10.245.102.15.
  • Are your nodes allowing IP forwarding on the Linux side?
  • Are your DigitalOcean firewall rules allowing traffic from any source on port 5432? Note that the pod CIDR and the Kubernetes service IP range are different from the host CIDR of your droplets.
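
A few starting points for the kube-proxy, CNI, and IP forwarding checks (pod names and labels vary between distributions, so treat these as a sketch rather than DigitalOcean-specific commands):

# kube-proxy and the CNI pods should be Running on every node.
kubectl --kubeconfig $k8sconf -n kube-system get pods -o wide

# Run on the node itself; IP forwarding must be enabled (prints 1).
sysctl net.ipv4.ip_forward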

Upvotes: 2
