Reputation: 905
I made a simple deployment of an nginx pod and then edited the deployment to add a readinessProbe and a livenessProbe via TCP, like in the official docs.
Once I saved it, the deployment created a new ReplicaSet and started the new pod, but the probes never succeed.
Here is the deployment YAML output of the describe command:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2020-09-21T18:51:13Z"
  generation: 2
  labels:
    app: dep1
  name: dep1
  namespace: default
  resourceVersion: "1683893"
  selfLink: /apis/apps/v1/namespaces/default/deployments/dep1
  uid: b23bceff-aca5-4c89-84c0-5882cf2df217
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: dep1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dep1
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 15
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        name: nginx
        ports:
        - containerPort: 8080
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-09-21T18:51:16Z"
    lastUpdateTime: "2020-09-21T18:51:16Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-09-21T18:51:13Z"
    lastUpdateTime: "2020-09-21T19:16:07Z"
    message: ReplicaSet "dep1-5d66c67794" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 2
  unavailableReplicas: 1
  updatedReplicas: 1
And here are the events of the pod:
Events:
Type     Reason     Age                     From                     Message
----     ------     ----                    ----                     -------
Normal   Scheduled  <unknown>               default-scheduler        Successfully assigned default/dep1-5d66c67794-qd48q to docker-desktop
Normal   Pulling    13m (x2 over 14m)       kubelet, docker-desktop  Pulling image "nginx"
Normal   Killing    13m                     kubelet, docker-desktop  Container nginx failed liveness probe, will be restarted
Normal   Pulled     13m (x2 over 14m)       kubelet, docker-desktop  Successfully pulled image "nginx"
Normal   Created    13m (x2 over 14m)       kubelet, docker-desktop  Created container nginx
Normal   Started    13m (x2 over 14m)       kubelet, docker-desktop  Started container nginx
Warning  Unhealthy  12m (x5 over 14m)       kubelet, docker-desktop  Liveness probe failed: dial tcp 10.1.0.174:8080: connect: connection refused
Warning  Unhealthy  9m48s (x30 over 14m)    kubelet, docker-desktop  Readiness probe failed: dial tcp 10.1.0.174:8080: connect: connection refused
Warning  BackOff    4m42s (x11 over 8m36s)  kubelet, docker-desktop  Back-off restarting failed container
Why is the connection refused when I opened the ports with the following?
ports:
- containerPort: 8080
  protocol: TCP
Upvotes: 0
Views: 246
Reputation: 1403
By default, the nginx web server listens on port 80, so it's not just that your health checks aren't working: your application will never be reachable on port 8080 either. The Docker image used in that tutorial is k8s.gcr.io/goproxy:0.1, but you are using nginx. Try this config, or change your deployment's image to k8s.gcr.io/goproxy:0.1:
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    livenessProbe:
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 20
      successThreshold: 1
      tcpSocket:
        port: 80
      timeoutSeconds: 1
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: 80
      timeoutSeconds: 1
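If you want to double-check which port nginx is actually listening on, you can print the default server block from the running container (assuming the stock nginx image, which keeps it in /etc/nginx/conf.d/default.conf), using the pod name from your events:

# shows "listen 80;" in the default server block
kubectl exec dep1-5d66c67794-qd48q -- cat /etc/nginx/conf.d/default.conf

After applying the corrected probes, the pod should pass readiness shortly after startup; you can watch it with:

kubectl get pods -l app=dep1 -w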
Upvotes: 2