Reputation: 1
I have installed Redis in a Kubernetes cluster via Helm, in the namespace redis1, using ports 6379 and 26379.
Then I installed a second Redis in the same cluster via Helm, in the namespace redis2, using ports 6380 and 26380.
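The installs were roughly like this (a simplified sketch in Helm 3 syntax; the chart reference and the value keys used to override the ports are illustrative):
$ helm install redis dandydeveloper/redis-ha --namespace redis1
$ helm install redis dandydeveloper/redis-ha --namespace redis2 --set redis.port=6380 --set sentinel.port=26380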
redis1 works, but redis2 fails with the following events:
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  30m                   default-scheduler  Successfully assigned redis2/redis-redis-ha-server-0 to worker3
Normal   Pulled     30m                   kubelet            Container image "redis:v5.0.6-alpine" already present on machine
Normal   Created    30m                   kubelet            Created container config-init
Normal   Started    30m                   kubelet            Started container config-init
Normal   Pulled     29m                   kubelet            Container image "redis:v5.0.6-alpine" already present on machine
Normal   Created    29m                   kubelet            Created container redis
Normal   Started    29m                   kubelet            Started container redis
Normal   Killing    28m (x2 over 29m)     kubelet            Container sentinel failed liveness probe, will be restarted
Normal   Pulled     28m (x3 over 29m)     kubelet            Container image "redis:v5.0.6-alpine" already present on machine
Normal   Created    28m (x3 over 29m)     kubelet            Created container sentinel
Normal   Started    28m (x3 over 29m)     kubelet            Started container sentinel
Warning  Unhealthy  14m (x25 over 29m)    kubelet            Liveness probe failed: dial tcp xx.xxx.x.xxx:26380: connect: connection refused
Warning  BackOff    4m56s (x85 over 25m)  kubelet            Back-off restarting failed container
I had previously installed RabbitMQ the same way in the same cluster and it works, so I hoped the same approach would work for Redis.
Please advise what should be done.
Upvotes: 0
Views: 1230
Reputation: 4614
As this issue was resolved in the comments section by @David Maze, I'm providing a Community Wiki answer for better visibility to other community members.
Services in Kubernetes allow applications to receive traffic and can be exposed in different ways, as there are several types of Kubernetes Services (see: Overview of Kubernetes Services). The default ClusterIP type exposes the Service on a cluster-internal IP, which makes it reachable only from within the cluster. Because every Service gets its own IP address, it's perfectly fine for multiple Services to listen on the same port, each on its own IP.
Below is a simple example to illustrate that it's possible to have two (or more) Services listening on the same port (port 80).
I've created two Deployments (app-1 and app-2) and exposed them with ClusterIP Services using the same port number:
$ kubectl create deploy app-1 --image=nginx
deployment.apps/app-1 created
$ kubectl create deploy app-2 --image=nginx
deployment.apps/app-2 created
$ kubectl expose deploy app-1 --port=80
service/app-1 exposed
$ kubectl expose deploy app-2 --port=80
service/app-2 exposed
$ kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS
pod/app-1-5d9ccdb595-x5s55   1/1     Running   0
pod/app-2-7747dcb588-trj8d   1/1     Running   0

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.12.54    <none>        80/TCP
service/app-2   ClusterIP   10.8.11.181   <none>        80/TCP
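For reference, the Services generated by kubectl expose look roughly like the following sketch (simplified, not the exact generated manifests):
apiVersion: v1
kind: Service
metadata:
  name: app-1
spec:
  type: ClusterIP
  selector:
    app: app-1        # kubectl create deploy adds the "app" label, which expose uses as the selector
  ports:
  - port: 80          # both Services can use port 80 because each gets its own ClusterIP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-2
spec:
  type: ClusterIP
  selector:
    app: app-2
  ports:
  - port: 80
    targetPort: 80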
Finally, we can check if it works as expected:
$ kubectl run test --image=nginx
pod/test created
$ kubectl exec -it test -- bash
root@test:/# curl 10.8.12.54:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
root@test:/# curl 10.8.11.181:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
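Applied to the original question: since the two redis-ha releases live in different namespaces, their Services get separate ClusterIPs, so the second release can simply keep the chart's default ports (6379 and 26379) instead of overriding them to 6380 and 26380. A rough sketch of the second install with default values (Helm 3 syntax; adjust the chart reference to whichever repository is actually used):
$ helm install redis dandydeveloper/redis-ha --namespace redis2 --create-namespace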
Upvotes: 0