In Kubernetes, a pod is the smallest API object; in more technical terms, it is the atomic scheduling unit of Kubernetes. In a cluster, a pod represents a running application process. It contains one or more containers along with resources shared by those containers, such as storage and networking.
The status of a pod tells you what stage of its life cycle it is currently in. There are five phases in the life cycle of a pod:
- Pending: The pod has been accepted by the cluster, but at least one of its containers has not yet been created.
- Running: All containers have been created and the pod has been bound to a node. At this point, the containers are running, or are being started or restarted.
- Succeeded: All containers in the pod have terminated successfully and will not be restarted.
- Failed: All containers have terminated and at least one container has failed, meaning it exited with a nonzero status.
- Unknown: The pod's status cannot be obtained, typically because of a communication error with the node it should be running on.
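You can check which phase a pod is currently in from the command line; the pod and namespace names here are placeholders:

kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.phase}'

This prints a single word such as Running or Pending, which is handy in scripts; kubectl describe pod <pod_name> -n <namespace> gives the full picture, including recent events.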
Sometimes something goes wrong with one of your pods (for example, it hits an error and terminates unexpectedly) and you need to restart it. This tutorial will show you how to use kubectl to restart a Kubernetes pod.
Why You May Want to Restart a Pod
First, let’s talk about some reasons why you might restart your pods:
- Resource contention or unexpected application behavior. For example, if a container with a 600 Mi memory limit tries to allocate more memory, the pod ends up being OOM-killed. In that case you must modify the resource specification and then restart the pod (see the example manifest after this list).
- A pod stuck in the Terminating state. You can spot this by looking for pods whose containers have all finished but that are still shown as running. This typically happens when a cluster node is taken out of service unexpectedly and the scheduler and controller manager cannot clean up all of the pods on that node.
- An error that can't be fixed any other way.
- Timeouts.
- Faulty or misconfigured deployments.
- Requests for persistent volumes that are not available.
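To illustrate the first point above, here is a minimal sketch of a pod manifest with an adjusted memory specification; the pod name, image, and limit values are assumptions for illustration, not taken from this tutorial:

apiVersion: v1
kind: Pod
metadata:
  name: shop-pod                  # hypothetical pod name
spec:
  containers:
  - name: shop
    image: example/shop:1.0       # hypothetical image
    resources:
      requests:
        memory: "600Mi"           # what the container normally needs
      limits:
        memory: "1Gi"             # raised limit so allocations no longer trigger an OOM kill

After raising the limit in the specification, restarting the pod applies the new values.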
Restarting Kubernetes Pods Using kubectl
You can use docker restart {container_id} to restart a container with Docker, but there is no equivalent restart command in Kubernetes. In other words, there is no kubectl restart {podname}.
Your pod may occasionally run into a problem and shut down unexpectedly, forcing you to restart it, and there is no single built-in way to do so, especially if you don't have the YAML file it was created from. Fear not: let's go over the options for restarting a Kubernetes pod with kubectl.
Method 1: kubectl scale
When there is no YAML file, a quick solution is to scale the number of replicas using the kubectl scale command with the --replicas flag set to zero:
kubectl scale deployment shop --replicas=0 -n service
kubectl get pods -n service
NAME                    READY   STATUS        RESTARTS   AGE
api-7996469c47-d7zl2    1/1     Running       0          11d
api-7996469c47-tdr2n    1/1     Running       0          11d
shop-5796d5bc7c-2jdr5   0/1     Terminating   0          2d
shop-5796d5bc7c-xsl6p   0/1     Terminating   0          2d
Note that a Deployment does not manage pods directly. It manages a ReplicaSet, which consists of the desired number of replicas and the pod template the ReplicaSet uses to create new pods. The kubectl scale command above sets the number of replicas that should run to zero, so the shop pods are terminated and not replaced:
kubectl get pods -n service

NAME                   READY   STATUS    RESTARTS   AGE
api-7996469c47-d7zl2   1/1     Running   0          11d
api-7996469c47-tdr2n   1/1     Running   0          11d
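For context, here is a rough sketch of what such a Deployment object might look like; the labels and container image are assumptions for illustration only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop
  namespace: service
spec:
  replicas: 2                     # desired number of pod replicas
  selector:
    matchLabels:
      app: shop                   # hypothetical label
  template:                       # pod template the ReplicaSet uses to create new pods
    metadata:
      labels:
        app: shop
    spec:
      containers:
      - name: shop
        image: example/shop:1.0   # hypothetical image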
To restart the pod, set the number of replicas back to at least one:

kubectl scale deployment shop --replicas=2 -n service

Check the pods now:

kubectl get pods -n service

NAME                    READY   STATUS    RESTARTS   AGE
api-7996469c47-d7zl2    1/1     Running   0          11d
api-7996469c47-tdr2n    1/1     Running   0          11d
shop-5796d5bc7c-2jdr5   1/1     Running   0          3s
shop-5796d5bc7c-xsl6p   1/1     Running   0          3s
Your Kubernetes pods have successfully restarted.
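If you're not sure how many replicas the Deployment was originally running, you can read the count from its spec before scaling down; a small example using the same shop Deployment:

kubectl get deployment shop -n service -o jsonpath='{.spec.replicas}'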
Method 2: kubectl rollout restart
Method 1 is a quick workaround, but the easiest way to restart Kubernetes pods is to use the kubectl rollout restart command.
The controller terminates one pod at a time and relies on the ReplicaSet to scale up new pods until all of them are newer than the moment the restart was triggered. A rollout restart is the ideal approach to restarting pods because the application won't be affected or stop serving traffic.
To perform a rollout restart, use the following command:

kubectl rollout restart deployment <deployment_name> -n <namespace>
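If you want to follow the restart as old pods are replaced by new ones, you can watch the rollout's progress, using the same placeholder names:

kubectl rollout status deployment <deployment_name> -n <namespace>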
Method 3: kubectl delete pod
Because Kubernetes is a declarative API, deleting a pod with the kubectl delete pod <pod_name> -n <namespace> command makes the actual state contradict the expected state recorded in the API.
Kubernetes will automatically recreate the pod to keep it consistent with the expected state, but if the ReplicaSet manages many pod objects, manually deleting them one by one is very tedious. Instead, you can use the following command to delete the ReplicaSet:

kubectl delete replicaset <replicaset_name> -n <namespace>
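As a quick illustration of the self-healing behavior described above, you could delete one of the managed pods from the earlier example and list the pods again; the replacement should appear with a new name and a fresh age. The pod name here comes from the sample output above and is used purely for illustration:

kubectl delete pod shop-5796d5bc7c-2jdr5 -n service
kubectl get pods -n service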
Method 4: kubectl get pod

Use the following command:

kubectl get pod <pod_name> -n <namespace> -o yaml | kubectl replace --force -f -
Here, since there is no YAML file on hand and the pod is already running, it can't simply be deleted (it wouldn't come back) or scaled to zero (there is no controller to scale), but it can be restarted using the above command. The command retrieves the YAML manifest of the currently running pod and pipes it to kubectl replace, which reads the manifest from standard input; with the --force flag it deletes and recreates the pod, achieving the restart.
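For example, for a hypothetical standalone pod named standalone-shop in the service namespace (the pod name is an assumption):

kubectl get pod standalone-shop -n service -o yaml | kubectl replace --force -f -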
In this roundup, you've been briefly introduced to Kubernetes pods, as well as some reasons why you might need to restart them. In general, the most recommended way to ensure no application downtime is to use kubectl rollout restart deployment <deployment_name> -n <namespace>.
While Kubernetes takes care of pod orchestration, it's no easy task to continually ensure that pods always run on highly available, cost-effective nodes that are fully utilized.