Reputation: 1241
On macOS (not using Minikube), I have modeled my Kubernetes cluster after this example, which means I have executed the following verbatim and in this order:
# Adding my own service to redis-proxy
kubectl create -f ./redis/redis-service.yaml
# Create a bootstrap master
kubectl create -f examples/storage/redis/redis-master.yaml
# Create a service to track the sentinels
kubectl create -f examples/storage/redis/redis-sentinel-service.yaml
# Create a replication controller for redis servers
kubectl create -f examples/storage/redis/redis-controller.yaml
# Create a replication controller for redis sentinels
kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml
# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3
# Adding my own NodeJS web client server
kubectl create -f web-deployment.yaml
The only difference is in redis-proxy.yaml: I used image: kubernetes/redis-proxy instead of image: kubernetes/redis-proxy:v2, because I wasn't able to pull the latter.
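For reference, the relevant part of my redis-proxy.yaml is sketched below; the container name and surrounding fields are assumed to match the upstream example, and only the image line is changed:
# Sketch only: container name and surrounding fields assumed from the upstream redis-proxy.yaml
spec:
  containers:
  - name: proxy
    image: kubernetes/redis-proxy   # upstream uses kubernetes/redis-proxy:v2, which I could not pull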
These are the objects I pass to ioredis to create my Redis instances (one for sessions and one as the main one):
config.js
main: {
  host: 'redis',
  port: 6379,
  db: 5
},
session: {
  host: 'redis',
  port: 6379,
  db: 6
}
In my web client pod web-3448218364-sf1q0, I get this repeated in the logs:
INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' }
INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' }
INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' }
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' }
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' }
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' }
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: connect ETIMEDOUT] errorno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'connect' }
INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event
WARN: ctn/53 on web-3448218364-sf1q0: Redis Connection Error: { [Error: read ECONNRESET] code: 'ECONNRESET', errno: 'ECONNRESET', syscall: 'read' }
INFO: ctn/53 on web-3448218364-sf1q0: Connected to Redis event
In my redis-proxy pod, I get this repeated in the logs:
Error connecting to read: dial tcp :0: connection refused
Cluster info:
$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.91.240.1 <none> 443/TCP 2d
redis 10.91.251.170 <none> 6379/TCP 31m
redis-sentinel 10.91.250.118 <none> 26379/TCP 31m
web 10.91.240.16 <none> 80/TCP 31m
$ kubectl get po
NAME READY STATUS RESTARTS AGE
redis-2frd0 1/1 Running 0 34m
redis-master 2/2 Running 0 34m
redis-n4x6f 1/1 Running 0 34m
redis-proxy 1/1 Running 0 34m
redis-sentinel-k8tbl 1/1 Running 0 34m
redis-sentinel-kzd66 1/1 Running 0 34m
redis-sentinel-wlzsb 1/1 Running 0 34m
web-3448218364-sf1q0 1/1 Running 0 34m
$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
web 1 1 1 1 39m
Question 1) Now I need to actually connect my application to a Redis pod. I should be connecting to the redis-proxy pod, right? So I created this redis-service.yaml service:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-proxy
    role: proxy
I believe I am reaching redis at port 6379, since I usually get a different error message otherwise. Going into the bash shell of my web container web-3448218364-sf1q0, I see the printenv variables REDIS_SERVICE_PORT=6379 and REDIS_SERVICE_HOST=10.91.251.170.
Question 2) In my error logs, what does dial tcp :0: mean? From my interactive Kubernetes console, under Services, in the Internal Endpoints column, I see this for the redis service:
redis:6379 TCP
redis:0 TCP
Is this 0 TCP related to that error? All of my services have 0 TCP listed in the console, but as you can see, not in the kubectl get svc output.
Upvotes: 7
Views: 9991
Reputation: 6604
The first thing to check when a Kubernetes service does not behave as expected is the endpoints of the corresponding service. In your case: kubectl get ep redis.
If my assumption is correct, it should show you something like this:
NAME ENDPOINTS AGE
redis <none> 42d
This means that your service does not select/match any pods.
In your service spec there is the key selector:; this selector has to match the labels of the actual pods you have. You are selecting all pods with the labels name: redis-proxy and role: proxy, which potentially do not match any pod.
You can run kubectl get pod --show-labels=true to show the labels on the pods and change your service accordingly (see the sketch below).
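As a sketch only (the selector values below are placeholders; replace them with whatever --show-labels actually reports for the proxy pod), the service would look something like this:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    # Every key/value listed here must be present on the pod's labels;
    # e.g. if the proxy pod only carries name: redis-proxy, drop role: proxy.
    name: redis-proxy
Once the selector matches, kubectl get ep redis should list the pod's IP and port instead of <none>.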
I don't know what the port 0 means in this context. Sometimes it is used when a service is only meant for DNS resolution.
Upvotes: 10
Reputation: 730
From the redis-master pod definition you posted above:
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: redis
    redis-sentinel: "true"
    role: master
  name: redis-master
spec:
  containers:
  - name: master
    image: k8s.gcr.io/redis:v1
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
  - name: sentinel
    image: kubernetes/redis:v1
    env:
    - name: SENTINEL
      value: "true"
    ports:
    - containerPort: 26379
  volumes:
  - name: data
    emptyDir: {}
You can see the container port of the sentinel is 26379. Thus, in the service (from the example):
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sentinel
    role: service
  name: redis-sentinel
spec:
  ports:
  - port: 26379
    targetPort: 26379
  selector:
    redis-sentinel: "true"
It again uses port 26379.
From the ioredis docs (host modified for your use case):
var redis = new Redis({
  sentinels: [{ host: 'redis-sentinel', port: 26379 }],
  name: 'mymaster'
});
redis.set('foo', 'bar');
The sentinel is not technically a proxy; ioredis first connects to the sentinel to find out which node is the master, and is then given the connection info for that node.
tl;dr: change the service back to the one used in the example, and use redis-sentinel as the host and 26379 as the port.
Upvotes: 0