Reputation: 14800
Why do I need 3 different kinds of probes in Kubernetes?
There are some existing questions (k8s - livenessProbe vs readinessProbe, Setting up a readiness, liveness or startup probe) and articles about this topic, but the distinction is still not clear to me:
Upvotes: 70
Views: 34561
Reputation: 29
startupProbe: Determines whether the application within a container has successfully started and is ready to handle requests.
livenessProbe: Monitors the ongoing health of a container and determines whether it is still running properly.
readinessProbe: Determines whether a container is ready to handle incoming requests and serve traffic.
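A minimal sketch of all three probes on one container may make the definitions concrete. The image name, port, and /healthz path are illustrative assumptions, not from the answer:

```yaml
# Hypothetical container spec showing the three probe types together.
containers:
  - name: app
    image: example/app:latest   # assumed image
    ports:
      - containerPort: 8080
    startupProbe:               # gates the other probes until the app has started
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30      # allow up to 30 x 10 s = 300 s for startup
      periodSeconds: 10
    livenessProbe:              # container is restarted if this fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:             # traffic stops being routed here if this fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 5
```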
Upvotes: -2
Reputation: 3828
The difference between livenessProbe, readinessProbe, and startupProbe
livenessProbe:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
readinessProbe:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
startupProbe:
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
Check the Kubernetes documentation for more.
Upvotes: 14
Reputation: 16299
Here's a concrete example of one we're using in our app. It has a single crude HTTP healthcheck, accessible on http://hostname:8080/management/health.
ports:
  - containerPort: 8080
    name: web-traffic
startupProbe:
  successThreshold: 1
  failureThreshold: 18
  periodSeconds: 10
  timeoutSeconds: 5
  httpGet:
    path: /management/health
    port: web-traffic
readinessProbe:
  successThreshold: 2
  failureThreshold: 2
  periodSeconds: 10
  timeoutSeconds: 5
  httpGet:
    path: /management/health
    port: web-traffic
livenessProbe:
  successThreshold: 1
  failureThreshold: 3
  periodSeconds: 30
  timeoutSeconds: 5
  httpGet:
    path: /management/health
    port: web-traffic
Upvotes: 19
Reputation: 37034
This topic is very interesting, but I would like to add one more detail (important from my point of view).
I was not able to understand the difference between a startup probe plus a liveness probe on one side, and a liveness probe with an extended failureThreshold on the other.
It looks like the difference lies in the nature of the startup probe: it is executed only until it first succeeds. In contrast, the liveness probe and readiness probe are cyclic (periodic, repeatable).
Once the startup probe has passed, it is not executed again for as long as the pod is alive. So the idea of the startup probe is to give the application a bit more time to start up.
Let's consider an example:
readinessProbe:
  initialDelaySeconds: 20
  failureThreshold: 2
  periodSeconds: 10
livenessProbe:
  initialDelaySeconds: 10
  failureThreshold: 3
  periodSeconds: 5
We wait up to 40 seconds [(20 + 2*10)] for the pod to become ready, and we consider the application not alive after 25 seconds [(10 + 3*5)] of unavailability.
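The point above can be sketched as config. In this hypothetical setup (the endpoint and port are assumptions), the startup probe grants a generous one-time startup budget, after which the much stricter liveness probe takes over, which a single liveness probe with a long initialDelaySeconds cannot express:

```yaml
# Hypothetical: the startup probe runs only until it first succeeds,
# giving the app up to 30 x 10 s = 300 s to start. After that, the
# liveness probe takes over with a strict 3 x 5 s failure budget.
startupProbe:
  httpGet:
    path: /healthz      # assumed endpoint
    port: 8080
  failureThreshold: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 3
  periodSeconds: 5
```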
Upvotes: -1
Reputation: 14800
These 3 kinds of probes have 3 different use cases. That's why we need all of them.
Liveness Probe
If the Liveness Probe fails, the container will be restarted (read more about failureThreshold).
Use case: Restart the pod if it is dead.
Best practices: Only include basic checks in the Liveness Probe. Never include checks on connections to other services (e.g. a database). The check shouldn't take too long to complete. Always specify a light Liveness Probe to make sure that the pod will be restarted if it is really dead.
Startup Probe
Startup Probes check whether the pod is available after startup.
Use case: Send traffic to the pod as soon as it is available after startup. Startup Probes may take longer to complete, because they are only called during initialization. They might trigger a warmup task (but also consider init containers for initialization). After the Startup Probe succeeds, the Liveness Probe takes over.
Best practices: Specify a Startup Probe if the pod takes a long time to start. The Startup and Liveness Probe can use the same endpoint, but the Startup Probe can have a less strict failure threshold, which prevents a failure on startup (see Kubernetes in Action).
Readiness Probe
In contrast to Startup Probes, Readiness Probes check whether the pod is available throughout its complete lifecycle. In contrast to Liveness Probes, only the traffic to the pod is stopped if the Readiness Probe fails; there is no restart.
Use case: Stop sending traffic to the pod if it temporarily cannot serve, e.g. because a connection to another service (such as a database) fails and the pod will recover later.
Best practices: Include all necessary checks, including connections to vital services. Nevertheless, the check shouldn't take too long to complete. Always specify a Readiness Probe to make sure that the pod only gets traffic if it can properly handle incoming requests.
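The best practices above can be sketched as a probe configuration. The /live and /ready endpoints are assumptions for illustration: /live performs only a basic in-process check, while /ready additionally verifies vital dependencies such as the database connection:

```yaml
# Hypothetical sketch following the best practices described above.
startupProbe:
  httpGet:
    path: /live           # same light endpoint as liveness
    port: 8080
  failureThreshold: 30    # less strict: tolerate a slow start
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /live           # basic check only; never test the database here
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 1       # keep the check fast
readinessProbe:
  httpGet:
    path: /ready          # includes checks of vital services (e.g. database)
    port: 8080
  periodSeconds: 10
```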
Documentation
Upvotes: 143
Reputation: 10358
I think the table below describes the use cases for each.
| Feature | Readiness Probe | Liveness Probe | Startup Probe |
|---|---|---|---|
| Examines | Indicates whether the container is ready to service requests. | Indicates whether the container is running. | Indicates whether the application within the container has started. |
| On Failure | If the readiness probe fails, the endpoints controller removes the pod's IP address from the endpoints of all services that match the pod. | If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. | If the startup probe fails, the kubelet kills the container, and the container is subjected to its restart policy. |
| Default Case | The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success. | If a container does not provide a liveness probe, the default state is Success. | If a container does not provide a startup probe, the default state is Success. |
Upvotes: 6