Probe Your Way to Success: Mastering Kubernetes Readiness and Liveness Probes


The Kubernetes probes feature lets you monitor the status of your containers and react appropriately when something goes wrong. Liveness probes and readiness probes are the two most commonly used categories of probes.

Readiness and liveness probes: what is the difference?

A readiness probe checks if a container is ready to serve traffic. If the readiness probe fails, Kubernetes will not start sending traffic to the container until it becomes ready. For example, if a database container is performing an upgrade, the readiness probe could detect this and prevent traffic from being sent until the upgrade is complete.

On the other hand, a liveness probe checks if a container is still running. If the liveness probe fails, Kubernetes will restart the container to try and restore it to a healthy state. For example, if a web server container is no longer serving requests, the liveness probe could detect this and trigger a restart.

There are numerous ways to implement liveness and readiness probes, such as HTTP requests, TCP socket connections, and command execution. The specific implementation will depend on the application you're running in your container.

K8s also has a third type of probe: startup probes.

You may have to deal with legacy applications that need extra time to start up on their first initialization. In these situations, it can be hard to set liveness probe parameters without sacrificing the fast failure detection that motivated the probe in the first place. The trick is to configure a startup probe with the same command, HTTP, or TCP check, and a failureThreshold * periodSeconds long enough to cover the worst-case startup delay. While a startup probe is configured, it suppresses liveness and readiness checks until it succeeds, ensuring that those probes do not interfere with application startup.

Here is an example of three types of probes used for one kube-manifest:
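A minimal sketch of such a manifest, assuming a hypothetical web application listening on port 8080 with /healthz and /ready endpoints (the image name, port, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      ports:
        - containerPort: 8080
      # Suppresses the other two probes until the app has started;
      # allows up to 30 * 10 = 300s of worst-case startup time.
      startupProbe:
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
      # Gates Service traffic until the app reports it is ready.
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
      # Restarts the container if it stops responding.
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
```

Note that the liveness and startup probes check the same endpoint; only the timing parameters differ.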

What if we don't specify a liveness probe?

Without a liveness probe specified, K8s decides whether to restart your container based on the container's PID 1 process. Because each container starts with its own process namespace, the first process inside the container takes on the special responsibilities of PID 1, and all other processes running inside the container are its children.

If no liveness probe is defined and the PID 1 process exits, K8s presumes the container has died, which in most cases is a safe assumption: restarting the process is the only universally effective, application-neutral corrective measure. As long as PID 1 is active, K8s will leave the container running, regardless of whether any child processes are healthy.

This default behavior might be exactly what you want if your application has one process, which is PID 1, in which case you won't need a liveness probe. It might not be what you want if you use an init tool like tini or dumb-init. Each application must decide whether to design its own liveness probe or to follow the default behavior.

What if we don't specify a readiness probe? 

If a readiness probe is not specified, K8s assumes the container is ready to handle traffic as soon as PID 1 starts. This is almost never what you want.

Any time a new container starts, such as during scaling events or deployments, assuming readiness without verifying it will cause problems (such as 502 errors from the load balancer). Every deployment without a readiness probe produces bursts of errors as the old containers shut down and the new ones start up.

If you are using autoscaling, new instances may start and stop at any time based on the metric thresholds you set, especially during periods of fluctuating traffic. Because the load balancer routes requests to containers that are not yet ready to receive network traffic, you will see bursts of errors as the application scales up or down.

Providing a readiness probe resolves these issues: it gives K8s a reliable way to determine whether your container is ready to handle traffic.

Probe methods overview:

  • HTTP GET Requests: The container runs an HTTP server, and the probe sends an HTTP GET request to a specified endpoint. The probe considers the container to be healthy if it receives a response with a success status code (e.g., 200 OK).

  • TCP Socket Connections: The container listens on a specified TCP port, and the probe tries to establish a socket connection to the port. The probe considers the container to be healthy if the connection is successful.

  • Command Execution: The probe runs a specified command inside the container and considers the container to be healthy if the command returns a zero exit code.

In addition, each type of probe shares a set of common configurable fields:

  • initialDelaySeconds: Number of seconds to wait after the container starts before running the first probe (default: 0)

  • periodSeconds: How often to run the probe, in seconds (default: 10)

  • timeoutSeconds: Number of seconds after which the probe times out (default: 1)

  • failureThreshold: Number of consecutive failures after which the container is deemed unhealthy/not ready (default: 3)

  • successThreshold: Number of consecutive successes required to mark the container healthy/ready again after a failure (default: 1; must be 1 for liveness and startup probes)
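Putting the timing fields together, the fragment below (values are illustrative) tolerates up to 5 + 3 * 10 = 35 seconds from container start before the first restart verdict can be reached:

```yaml
livenessProbe:
  httpGet:
    path: /healthz         # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 5   # wait 5s after the container starts
  periodSeconds: 10        # then probe every 10s
  timeoutSeconds: 2        # each probe must answer within 2s
  failureThreshold: 3      # 3 consecutive failures => restart
  successThreshold: 1      # must be 1 for liveness probes
```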

HTTP Probe

The HTTP probe sends an HTTP request, and the status code determines the health state: any code from 200 up to (but not including) 400 is considered a success. Any other status code is considered a failure.
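A minimal HTTP probe sketch (the path, port, and header value are illustrative assumptions; httpHeaders is optional):

```yaml
readinessProbe:
  httpGet:
    path: /ready        # hypothetical endpoint
    port: 8080
    httpHeaders:        # optional custom request headers
      - name: Accept
        value: application/json
  periodSeconds: 5
```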

TCP Probe

A TCP probe determines whether a TCP connection can be established on the specified port. An open port counts as success, while a closed port or a connection reset counts as failure.
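For example, a TCP probe for a database container might look like this (the port is an illustrative assumption, here PostgreSQL's default):

```yaml
readinessProbe:
  tcpSocket:
    port: 5432      # success if the connection can be established
  periodSeconds: 10
```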

Exec Probe

An exec probe runs a command inside the container without using a shell. The command's exit status determines the health state: zero is healthy, and any other value is unhealthy.
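A sketch of an exec probe, assuming the application touches /tmp/healthy while it is working (the file path is illustrative):

```yaml
livenessProbe:
  exec:
    command:        # argv-style list, run without a shell
      - cat
      - /tmp/healthy
  periodSeconds: 5  # exit code 0 = healthy, anything else = restart
```

Because no shell is involved, the command must be given as a list of arguments; shell syntax such as pipes will not work unless you invoke a shell explicitly.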


It's also worth noting that the configuration of liveness and readiness probes can be specified in a Kubernetes pod definition, and can be customized to meet the needs of your specific use case. For example, you can specify the frequency of the probes, the number of failures that are allowed before the container is considered to be unresponsive, and the amount of time that the probe will wait for a response.


Both readiness and liveness probes are important for ensuring the proper functioning of your containers in a Kubernetes environment. The choice of which probe is more important may depend on the specific requirements of your application. Liveness probes are crucial for detecting and fixing problems that prevent a container from functioning correctly. They help ensure that containers are restarted if they become unresponsive so that they can return to a healthy state. Readiness probes are essential for controlling traffic to a container. They help ensure that traffic is only sent to containers that are ready to receive it, preventing traffic from being sent to containers that are not yet ready or that are undergoing maintenance.

In general, it's recommended to use both liveness and readiness probes to achieve a complete health-checking solution. This way, you can detect and fix problems that prevent a container from functioning correctly, as well as control traffic to containers in a way that prevents disruptions and improves overall system performance.

To summarize, Kubernetes probes are a valuable tool for ensuring container stability and availability, and for detecting and responding to problems in a timely manner.

At Apprecode, we already use the latest version of Kubernetes for our leading customers. Please contact us for more information.
