In the ever-evolving landscape of container orchestration, Kubernetes has emerged as the undisputed champion, revolutionizing how we deploy and manage applications at scale. As the heartbeat of modern cloud-native ecosystems, Kubernetes thrives on maintaining the health and availability of its myriad of containers.
Like a vigilant physician, Kubernetes diligently monitors the well-being of these containers using a suite of health checks that encompass both liveness and readiness probes. But, like any artful practice, running these health checks effectively requires a nuanced understanding of their intricacies and how they interplay with the inner workings of your applications. In this article, we embark on an exploration into the world of Kubernetes health checks, unraveling the mysteries of probes, and equipping you with the knowledge to craft a resilient and fault-tolerant infrastructure that stands the test of time. So, fasten your seatbelts as we dive deep into the core of Kubernetes, where the pulse of your containers beats in harmony with the orchestration magic.
Why Are Probes Important?
Distributed systems can be hard to manage. Since their components work independently, each part keeps running even after other components have failed. At some point, an application may crash. Or an application might still be in its initialization stage and not yet ready to receive and process requests.
You can only assert the system’s health if all of its components are working. Using probes, you can determine whether a container is dead or alive, and decide if Kubernetes should temporarily prevent other containers from accessing it. Kubernetes verifies individual containers’ health to determine the overall pod health.
Types of Probes
As you deploy and operate distributed applications, containers are created, started, run, and terminated. To check a container’s health in the different stages of its lifecycle, Kubernetes uses different types of probes.
A liveness probe is used to determine whether a container is still running and functioning properly. This type of probe detects and recovers from container crashes or hang-ups. A liveness probe can check the responsiveness of an application or perform any other check that indicates the container is still alive and healthy. If the liveness probe fails, Kubernetes will restart the container to restore its functionality.
A readiness probe is used to determine whether a container is ready to receive traffic. This type of probe ensures that a container is fully up and running and can accept incoming connections before it is added to the service load balancer. A readiness probe can check the availability of an application's dependencies or perform any other check that indicates the container is ready to serve traffic. If the readiness probe fails, the container is removed from the service load balancer until the probe succeeds again.
A startup probe is used to determine whether the application inside a container has finished starting. It is useful for slow-starting containers: liveness and readiness checks are held off until the startup probe succeeds, so the container is not killed before it has had a chance to come up.
Each type of probe has its own configuration options, such as the endpoint to check, the probe interval, and the success and failure thresholds. By using these probes, Kubernetes can ensure that containers are running and healthy and can take appropriate action if a container fails to respond.
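For slow-starting applications, Kubernetes also supports a startup probe, which holds off the other checks until the application has come up. A minimal sketch of such a probe follows; the /v1/started path is illustrative and not taken from this article's application:

```yaml
startupProbe:
  httpGet:
    path: /v1/started   # hypothetical endpoint; use your app's startup check
    port: 8080
  periodSeconds: 10     # probe every 10 seconds...
  failureThreshold: 30  # ...allowing up to 300 seconds for startup
```

Once the startup probe succeeds, kubelet switches over to the configured liveness and readiness probes.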
How to Implement Kubernetes Probes?
Kubernetes probes can be implemented in a few different ways. The first is to send an HTTP GET request to an endpoint exposed by the application; the response status code determines whether the application is healthy. The second is to open a TCP socket against a container port, which detects whether the application is accepting connections at all, or is taking too long to respond. The third is to run a custom command inside the container, which can detect specific conditions such as resource usage, slow responses, or changes in the application or service.
Once you have decided which type of probe you will be using, you can configure it in the pod specification. You specify the type of probe, how frequently it runs, and its parameters, such as timeouts and thresholds. Once the probe is configured, you can deploy the manifest to the Kubernetes cluster.
How to Create Probes?
To create health check probes, you must issue requests against a container.
There are three ways of implementing Kubernetes liveness, readiness, and startup probes:
- Sending an HTTP request
- Opening a TCP socket
- Running a command inside the container
An HTTP request is a common and straightforward mechanism for creating a liveness probe. To expose an HTTP endpoint, you can implement any lightweight HTTP server in your container.
A Kubernetes probe will perform an HTTP GET request against your endpoint at the container’s IP to verify whether your service is alive. If your endpoint returns a success code, kubelet will consider the container alive and healthy. Otherwise, kubelet will terminate and restart the container.
The YAML configuration would look similar to this snippet:
readinessProbe:
  httpGet:
    path: /v1/ready
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 30
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 6
livenessProbe:
  httpGet:
    path: /v1/live
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 600
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 6
- initialDelaySeconds: 0 - Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0. Minimum value is 0.
- periodSeconds: 30 - How often (in seconds) to perform the probe. Defaults to 10. Minimum value is 1.
- timeoutSeconds: 10 - Number of seconds after which the probe times out and is marked as failed if no response has arrived. Defaults to 1. Minimum value is 1.
- successThreshold: 1 - Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Minimum value is 1.
- failureThreshold: 6 - Number of consecutive failed checks after which the pod is marked Unready (readiness probe) or the container is restarted (liveness probe). Defaults to 3. Minimum value is 1.
With the readiness values above, a container that stops responding will be marked Unready after at most periodSeconds x failureThreshold = 30 x 6 = 180 seconds.
The configuration uses the initialDelaySeconds and periodSeconds properties to tell kubelet when and how often to probe: the readiness probe runs every 30 seconds and the liveness probe every 600 seconds, starting immediately after the container starts (initialDelaySeconds: 0). Kubelet checks whether the container is alive and healthy by sending requests to the /v1/live path on port 8080 and expecting a success status code.
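The probe stanzas above live inside the container spec of a Deployment. A complete deployment.yml might look like the following sketch; the image name, labels, and replica count are illustrative, and only the probe sections come from the snippet above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: nginx:1.25       # illustrative image; substitute your own
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /v1/ready
            port: 8080
          periodSeconds: 30
        livenessProbe:
          httpGet:
            path: /v1/live
            port: 8080
          periodSeconds: 600
```

Your own container image must actually serve the /v1/ready and /v1/live endpoints on port 8080 for these probes to pass.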
We are all set with our YAML file. Assuming you have a running cluster ready, let's deploy the above-mentioned manifest file with the following command.
Note: In this blog, we are using MicroK8s, a lightweight and easy-to-install Kubernetes distribution. MicroK8s provides a simplified Kubernetes environment for local development and testing purposes. As a result, the microk8s.kubectl command is used to interact with the Kubernetes cluster. If you are using a different Kubernetes distribution, you can replace microk8s.kubectl with your corresponding kubectl command.
microk8s.kubectl apply -f deployment.yml
You should see the successful deployment of the file.
Let’s check the pod status with the following command to make sure the pods are running.
microk8s.kubectl get pods
Let’s describe a pod using the following command.
microk8s.kubectl describe po example-deployment-5b7d7dd4cc-bnt98
You should see the following result.
You can see the Liveness and Readiness status in the above image when you describe the pods.
Let’s check the events section.
You can see the different events, such as scheduled, pulled, created, and started. All the pod events were successful.
When a TCP socket probe is defined, Kubernetes tries to open a TCP connection on your container's specified port. If the connection succeeds, the container is considered healthy. TCP probes are helpful when HTTP or command probes are not adequate. Scenarios where containers can benefit from TCP probes include gRPC and FTP services, where the TCP protocol infrastructure already exists.
With the following configuration, kubelet will try to open a socket to your container on the specified port.
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 30
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 6
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 0
  periodSeconds: 600
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 6
The above configuration is similar to the HTTP check: it defines a readiness and a liveness probe. Note that a tcpSocket probe takes only a port, not a path. When the container starts, kubelet sends the first readiness probe immediately (initialDelaySeconds: 0) and then keeps checking the container's readiness every 30 seconds.
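The third mechanism, running a command inside the container, is not shown above. A minimal sketch follows; the /tmp/healthy marker file is illustrative, and your application would need to create and maintain it:

```yaml
livenessProbe:
  exec:
    command:          # run this command inside the container
    - cat
    - /tmp/healthy    # hypothetical health marker file written by the app
  initialDelaySeconds: 5
  periodSeconds: 30
```

If the command exits with status 0, kubelet considers the container healthy; any other exit code counts as a probe failure.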
Kubernetes probes are an important part of the Kubernetes platform, as they help ensure that applications and services run smoothly. They can detect potential problems before they become serious, allowing you to take corrective action quickly. Kubernetes probes come in three types: liveness, readiness, and startup probes, and each can be implemented as an HTTP request, a TCP socket check, or a custom command that detects specific conditions in an application or service. Implementing Kubernetes probes is a straightforward process done directly in your pod specifications.
If you are looking for a way to ensure the health of your applications and services, Kubernetes probes are the way to go. So, make sure to implement Kubernetes probes in your Kubernetes environment today!