Rotareti

Reputation: 53943

Connection Refused error when connecting to Kubernetes Redis Service

I have a single instance Redis Deployment/Service on my cluster:

Redis.yaml

---

apiVersion: v1
kind: Service
metadata:
  name: myapp-redis
  labels:
    name: myapp-redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: myapp-redis

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-redis
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-redis
  labels:
    name: myapp-redis
spec:
  selector:
    matchLabels:
      name: myapp-redis
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        name: myapp-redis
    spec:
      containers:
      - name: myapp-redis
        image: registry/myapp-redis:0.0.0-alpha.13
        imagePullPolicy: Always
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: myapp-redis
          mountPath: /etc/redis/
      imagePullSecrets:
      - name: regsecret
      volumes:
      - name: myapp-redis
        persistentVolumeClaim:
          claimName: myapp-redis

---

Redis Service Description

I get this from kubectl describe svc myapp-redis -n mw-dev:

Name:              myapp-redis
Namespace:         mw-dev
Labels:            name=myapp-redis
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myapp-redis"},"name":"myapp-redis","namespace":"mw-dev"},"sp...
Selector:          name=myapp-redis
Type:              ClusterIP
IP:                10.3.0.137
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.2.2.173:6379
Session Affinity:  None
Events:            <none>

Check if redis is up and running

To make sure the database is up and running, I can open a shell inside the pod with kubectl exec -it myapp-redis-[..] sh -n mw-dev and ping the database with redis-cli -a test ping. Doing so returns PONG, so the password (test) works and the db is up.
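redis-cli is not the only way to exercise the server. The same AUTH/PING exchange can be sent over a raw socket using Redis's inline command syntax, which is handy when an image doesn't ship redis-cli. A minimal standard-library sketch; the host, port, and password are assumptions mirroring the values above:

```python
import socket

def redis_ping(host: str, port: int = 6379, password: str = None,
               timeout: float = 3.0) -> list:
    """Speak Redis's inline-command syntax over a raw TCP socket.

    Sends AUTH (if a password is given) followed by PING and returns the
    raw single-line replies, e.g. ['+OK', '+PONG'].
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        replies = []
        if password:
            sock.sendall(f"AUTH {password}\r\n".encode())
            replies.append(sock.recv(1024).decode().strip())
        sock.sendall(b"PING\r\n")
        replies.append(sock.recv(1024).decode().strip())
        return replies
```

Run from any pod in the namespace (e.g. via kubectl exec into a python3 shell), redis_ping('myapp-redis', password='test') should come back as ['+OK', '+PONG'] when the service is reachable.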

Problem connecting python app to redis service

However, if I try to connect a pod running a Python app to the redis db, I get a connection refused error from the Python app.

kubectl logs myapp-backend-596... -n mw-dev

[...]
  File "/usr/local/lib/python3.6/site-packages/aioredis/stream.py", line 19, in open_connection
    lambda: protocol, host, port, **kwds)
  File "uvloop/loop.pyx", line 1733, in create_connection
  File "uvloop/loop.pyx", line 1712, in uvloop.loop.Loop.create_connection
ConnectionRefusedError: [Errno 111] Connection refused
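
Errno 111 means the TCP handshake itself was rejected, before any Redis authentication happens. A bare TCP probe from inside the backend pod separates DNS/Service problems from problems inside the Redis container. A standard-library sketch; the REDIS_HOST default is an assumption mirroring the Deployment's env var:

```python
import os
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

# REDIS_HOST mirrors the env var set in the Deployment; 6379 is the Service port.
host = os.environ.get("REDIS_HOST", "myapp-redis")
print(host, 6379, can_connect(host, 6379))
```

If the name resolves and the connect fails, the Service and kube-dns are fine and the refusal comes from the Redis process itself, which narrows the search considerably.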

This is the configuration for the Python app:

Backend.yaml

---

apiVersion: v1
kind: Service
metadata:
  name: myapp-backend
  labels:
    name: myapp-backend
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    name: myapp-backend

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-backend
  labels:
    name: myapp-backend
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: myapp-backend
    spec:
      containers:
      - name: myapp-backend
        image: registry/myapp-backend:0.0.0-alpha.13
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        env:
        - name: REDIS_HOST
          value: 'myapp-redis'
        - name: REDIS_PASSWORD
          value: 'test'
      imagePullSecrets:
      - name: regsecret

---

Python Backend Pod Description

This is what I get from kubectl describe po myapp-backend-58... -n mw-dev:

Name:           myapp-backend-585d...
Namespace:      mw-dev
Node:           worker-2/ip...
Start Time:     Sat, 03 Feb 2018 13:08:01 +0100
Labels:         name=myapp-backend
                pod-template-hash=myhash
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"mw-dev","name":"myapp-backend-58...","uid":"e13...
Status:         Running
IP:             10.2.2.180
Controlled By:  ReplicaSet/myapp-backend-58...
Containers:
  myapp-backend:
    Container ID:   docker://78cfc218d...
    Image:          registry/myapp-backend:0.0.0-alpha.13
    Image ID:       docker-pullable://registry/mw-dev/myapp-backend@sha256:785a...
    Port:           8000/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 03 Feb 2018 13:55:07 +0100
      Finished:     Sat, 03 Feb 2018 13:55:08 +0100
    Ready:          False
    Restart Count:  14
    Environment:
      REDIS_HOST:      myapp-redis
      REDIS_PASSWORD:  test
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7... (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-7cm7c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-7...
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                  From                          Message
  ----     ------                 ----                 ----                          -------
  Normal   Scheduled              50m                  default-scheduler             Successfully assigned myapp-backend-58... to worker-2
  Normal   SuccessfulMountVolume  50m                  kubelet, worker-2  MountVolume.SetUp succeeded for volume "default-token-7..."
  Warning  BackOff                50m (x4 over 50m)    kubelet, worker-2  Back-off restarting failed container
  Normal   Pulling                50m (x4 over 50m)    kubelet, worker-2  pulling image "registry/mw-dev/myapp-backend:0.0.0-alpha.13"
  Normal   Pulled                 50m (x4 over 50m)    kubelet, worker-2  Successfully pulled image "registry/mw-dev/myapp-backend:0.0.0-alpha.13"
  Normal   Created                50m (x4 over 50m)    kubelet, worker-2  Created container
  Normal   Started                50m (x4 over 50m)    kubelet, worker-2  Started container
  Warning  FailedSync             52s (x229 over 50m)  kubelet, worker-2  Error syncing pod

Running pods

kubectl get pods --all-namespaces:

NAMESPACE     NAME                                                          READY     STATUS    RESTARTS   AGE
kube-system   cert-manager-cert-manager-59fff59c7b-vdnd7                    2/2       Running   4          3d
kube-system   digitalocean-cloud-controller-manager-6d6b675bfd-nxqq2        1/1       Running   0          3d
kube-system   digitalocean-provisioner-d4c79dfb4-mhb5d                      1/1       Running   0          3d
kube-system   heapster-56bf7c7896-9rv4z                                     1/1       Running   0          3d
kube-system   kube-apiserver-wp7b4                                          1/1       Running   5          10d
kube-system   kube-controller-manager-586c9b745b-gkqk4                      1/1       Running   2          10d
kube-system   kube-controller-manager-586c9b745b-pdhw7                      1/1       Running   1          10d
kube-system   kube-dns-7d74988c8b-z9zs2                                     3/3       Running   0          10d
kube-system   kube-flannel-5wlk6                                            2/2       Running   0          10d
kube-system   kube-flannel-khsvq                                            2/2       Running   0          10d
kube-system   kube-flannel-skt2m                                            2/2       Running   4          10d
kube-system   kube-proxy-cwqv8                                              1/1       Running   2          10d
kube-system   kube-proxy-mg8jx                                              1/1       Running   0          10d
kube-system   kube-proxy-vmw8g                                              1/1       Running   0          10d
kube-system   kube-scheduler-7686847675-5kkhn                               1/1       Running   1          10d
kube-system   kube-scheduler-7686847675-lkm98                               1/1       Running   2          10d
kube-system   kubernetes-dashboard-7658f8d76-svtzh                          1/1       Running   0          3d
kube-system   loadbalancer-nginx-ingress-controller-8649c7986b-jndzz        1/1       Running   3          3d
kube-system   loadbalancer-nginx-ingress-default-backend-6fb9444c64-bpz4g   1/1       Running   0          3d
kube-system   pod-checkpointer-kfcpp                                        1/1       Running   0          10d
kube-system   pod-checkpointer-kfcpp-spc1aitu1i-master-1                    1/1       Running   0          10d
kube-system   tiller-deploy-fb8d7b69c-6xrpn                                 1/1       Running   2          3d
mw-dev        myapp-backend-6c4b56d9b7-2mfbs                                1/1       Running   0          21m
mw-dev        myapp-frontend-7478fd456b-5ztvq                               1/1       Running   0          1d
mw-dev        myapp-redis-67d45d97d7-7wxtj                                  1/1       Running   0          1d

Making sure Python app received correct values for env vars

The Python app prints out the values that it uses to connect to the database. Looking at the pod logs, I can see that the values are identical to the ones given in Backend.yaml (REDIS_HOST=myapp-redis, REDIS_PASSWORD=test).

It works locally in docker

If I run the redis container and the python app container locally with docker on my laptop, they connect fine.

Cluster Info

The cluster uses a nginx-ingress controller to expose services to the internet. I'm not sure if this matters, since I need to connect the Python pod to the redis service internally.

The cluster consists of one master node, two worker nodes, and a LoadBalancer for the nginx-ingress controller, all running on DigitalOcean.

What now?

At this point I have no idea how to debug the issue further. I have searched the web for hours without finding a solution. Any suggestion would be appreciated!

Upvotes: 4

Views: 16237

Answers (1)

Rotareti

Reputation: 53943

The connection refused error was caused by the redis configuration.

I had to change the redis bind address from localhost (127.0.0.1) to 0.0.0.0 to allow connections from other pods.

In redis.conf I changed this line:

bind 127.0.0.1

to this:

bind 0.0.0.0
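
Rather than rebuilding the image for a config change, one option in Kubernetes is to keep redis.conf in a ConfigMap and mount it into the pod. An illustrative sketch only; the ConfigMap name and the requirepass line are assumptions based on the manifests above:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-redis-conf
data:
  redis.conf: |
    bind 0.0.0.0
    requirepass test
```

The ConfigMap can then be mounted as a volume in the Deployment (the example above mounts /etc/redis from a PVC, so the mount paths would need to be reconciled).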

Upvotes: 10
