What are Kubernetes Pods and How Do They Work?

Introduction

Kubernetes objects are the fundamental persistent entities that describe the state of the Kubernetes cluster. Pods are the elemental objects and building blocks of the Kubernetes architecture.

This article will provide a complete beginner’s overview of Kubernetes pods. Understanding how pods work will help you understand the mechanism behind this container orchestration platform.

What is a Kubernetes pod?

A pod is the smallest deployable unit in Kubernetes, an abstraction layer that hosts one or more OCI-compliant containers. Pods provide containers with the environment to run in and ensure that containerized applications have access to storage volumes, the network, and configuration information.

Pods vs. Containers vs. Nodes vs. Clusters

Pods serve as a bridge connecting application containers with higher-level concepts in the Kubernetes hierarchy. Here is how pods compare to other essential elements of the Kubernetes orchestration platform.

  • Pods vs. Containers. A container packages all the libraries, dependencies, and other resources required for an application to run independently. A pod, on the other hand, wraps one or more containers and provides the shared context that allows Kubernetes to manage the application containers it hosts.
  • Pods vs. Nodes. A node in Kubernetes is a physical (bare metal) or virtual machine responsible for hosting pods. A single node can run multiple pods. While every pod must have a node to run on, not all nodes host pods. The master node runs the control plane, which handles pod scheduling, while pods reside on worker nodes.
  • Pods vs. Cluster. A Kubernetes cluster is a group of nodes with at least one master node (high-availability clusters require more than one master) and up to 5,000 worker nodes. Clusters allow scheduling of pods across multiple nodes with different configurations and operating systems.

Types of pods

Depending on the number of containers they contain, pods can be single-container or multi-container pods. Below is a brief description of both types.

Single-container pods

Pods in Kubernetes typically host a single container that provides all the dependencies needed for an application to run. Single-container pods are easy to create and offer a way for Kubernetes to control individual containers indirectly.

Multi-container pods

Multi-container pods host containers that depend on each other and share the same resources. Within such pods, containers can establish simple network connections and access the same storage volumes. Since they are all in the same pod, Kubernetes treats the containers as a single unit, which simplifies their management.

Benefits of using Kubernetes pods

Pod design is one of the main reasons Kubernetes is gaining popularity as a container orchestrator. By employing pods, Kubernetes can improve container performance, limit resource consumption, and ensure continuity of deployment.

Here are some of the crucial benefits of Kubernetes pods:

  • Container abstraction. Since a pod is an abstraction layer for the containers it hosts, Kubernetes can treat those containers as a single unit within the cluster, simplifying container management.
  • Resource sharing. All containers in a single pod share the same network namespace. This property ensures that they can communicate through localhost, which significantly simplifies networking. In addition to sharing a network, pod containers can also share storage volumes, a particularly useful feature for managing stateful applications.
  • Load balancing. Pods can be replicated across the cluster, and a load balancing service can balance traffic between replicas. Kubernetes load balancing is an easy way to expose an application to external network traffic.
  • Scalability. Kubernetes can automatically increase or decrease the number of pod replicas based on predefined criteria. This feature allows the system to scale up or down depending on the workload.
  • Health monitoring. The system periodically checks the status of pods and restarts or reschedules any that crash or become unhealthy. Automatic health monitoring is an important factor in maintaining application uptime.

How do pods work?

Pods run according to a set of rules defined within your Kubernetes cluster and the configuration provided when creating the object that generated them. The following sections provide an overview of the most important concepts related to the life of a pod.

Lifecycle

The lifecycle of a pod depends on its purpose in the cluster and the Kubernetes object that created it.

Kubernetes objects such as Jobs and CronJobs create pods that terminate after completing a task (for example, reporting or backup). On the other hand, objects such as Deployments, ReplicaSets, DaemonSets, and StatefulSets generate pods that run until manually interrupted by the user.

The state of a pod at any given stage of its life cycle is called the pod phase. There are five possible pod phases:

  • Pending. Kubernetes has accepted the pod, and the containers that compose it are being prepared to run.
  • Running. Kubernetes has completed the container setup and assigned the pod to a node. At least one container must be starting, restarting, or running for this status to be displayed.
  • Succeeded. Once a pod completes a task (for example, a job-related operation), it terminates with the Succeeded status. This means all of its containers stopped and will not restart.
  • Failed. One or more containers in the pod terminated with a nonzero (error) exit status.
  • Unknown. The state of the pod cannot be determined, usually because of a problem communicating with the node on which the pod runs.

Apart from the phases, the pods also have conditions. Possible condition types are PodScheduled, Ready, Initialized, and Unschedulable. Each type has three possible states: true, false, or unknown.
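A pod's current phase and conditions can be inspected with standard kubectl commands; a minimal sketch, with the pod name as a placeholder:

kubectl get pod [pod-name] -o jsonpath='{.status.phase}'   # prints only the phase
kubectl describe pod [pod-name]                            # shows the Conditions section, among other details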

Logs

Kubernetes collects logs from containers running inside a pod. While each container runtime has its own way of handling and redirecting log output, integration with Kubernetes follows the standardized CRI log format.

Users can configure Kubernetes to rotate container logs and manage the log directory automatically. Logs can be retrieved through a dedicated Kubernetes API endpoint, accessible via the kubectl logs command.
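Log rotation is controlled by the kubelet. A minimal sketch of the relevant settings, assuming a kubelet configuration file is in use; the values are illustrative, not recommendations:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi    # rotate a container log file once it reaches this size (illustrative value)
containerLogMaxFiles: 5      # keep at most this many rotated log files per container (illustrative value)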

Controllers

Controllers are Kubernetes objects that create pods, monitor their health and number, and perform management actions. These actions include restarting and terminating pods, creating new pod replicas, etc.

A daemon called the kube-controller-manager is in charge of managing the controllers. It uses control loops to monitor the state of the cluster and communicates with the API server to make the necessary changes.

The following is a list of the six most important Kubernetes controllers:

  • ReplicaSet. Creates a set of identical pods to run the same workload.
  • Deployment. Creates a configured ReplicaSet and provides additional update and rollback capabilities.
  • DaemonSet. Ensures that a copy of a pod runs on every (or every selected) node in the cluster.
  • StatefulSet. Manages stateful applications and creates persistent storage and pods whose names persist across restarts.
  • Job. Creates pods that terminate successfully after completing a task (see the example manifest after this list).
  • CronJob. Schedules Jobs to run at specified times.
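For instance, a minimal Job manifest might look like the sketch below; the object name, image, and command are placeholder assumptions added for illustration:

apiVersion: batch/v1
kind: Job
metadata:
  name: backup-job                # placeholder name
spec:
  backoffLimit: 2                 # retries before the Job is marked as failed
  template:
    spec:
      restartPolicy: Never        # Job pods terminate instead of restarting
      containers:
      - name: backup
        image: busybox            # placeholder image
        command: ["sh", "-c", "echo backing up && sleep 5"]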

Templates

In their YAML configurations, Kubernetes controllers have specifications called pod templates. The templates describe which containers and volumes a pod should run. Controllers use templates whenever they need to create new pods.

Users update pod settings by changing the parameters specified in a controller’s PodTemplate field.
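As an illustration, the excerpt below sketches the template section of a Deployment spec; the labels, container name, and volume are assumptions added for this example:

spec:
  template:                  # pod template used by the controller to create pods
    metadata:
      labels:
        app: web             # placeholder label
    spec:
      containers:
      - name: web            # placeholder container name
        image: nginx:latest
        volumeMounts:
        - name: cache
          mountPath: /cache
      volumes:
      - name: cache
        emptyDir: {}         # volume defined in the template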

Networking

Each pod in a Kubernetes cluster receives a unique cluster IP address. Containers within that pod share this address, along with the network namespace and ports. This configuration allows them to communicate using localhost.

If a container in one pod needs to communicate with a container in another pod in the cluster, it must use IP networking. Each pod has a virtual Ethernet connection that attaches to a virtual Ethernet device on the node, creating a tunnel for the pod network within the node.
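A minimal sketch of a two-container pod whose containers communicate over localhost; the pod name, images, and command are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # placeholder name
spec:
  containers:
  - name: web
    image: nginx:latest             # serves HTTP on port 80
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl          # placeholder image with curl available
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]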

Storage

Pod data is stored in volumes, storage directories accessible to all containers within the pod. There are two main types of storage volumes:

  • Persistent volumes persist through pod errors. The PersistentVolume subsystem manages the lifecycle of volumes and is independent of the lifecycle of related pods.
  • Ephemeral volumes are destroyed along with the pod that used them.

The user specifies the volumes a pod should use in the pod's YAML specification.
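A sketch of such a specification, mounting one ephemeral volume and one persistent volume claim; the pod name, mount paths, and claim name are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: storage-demo                # placeholder name
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch       # ephemeral data
    - name: data
      mountPath: /var/data          # persistent data
  volumes:
  - name: scratch
    emptyDir: {}                    # ephemeral volume, deleted with the pod
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc           # placeholder claim managed by the PersistentVolume subsystem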

Working with Kubernetes pods

Users interact with pods using kubectl, a command-line tool that controls Kubernetes clusters by sending HTTP requests to the Kubernetes API.

The following sections list some of the most common pod management operations.

Pod OS

Users can configure the operating system on which a pod should run. Currently, Linux and Windows are the only two supported operating systems.

Specify the operating system (linux or windows) in the spec.os.name field of the pod's YAML definition. Kubernetes will not schedule pods on nodes that do not meet the Pod OS criteria.
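A minimal sketch of the field in a pod definition; the pod name and image are placeholders, and the field requires a reasonably recent Kubernetes version:

apiVersion: v1
kind: Pod
metadata:
  name: linux-pod          # placeholder name
spec:
  os:
    name: linux            # or "windows"; other values are rejected
  containers:
  - name: app
    image: nginx:latest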

Create pods directly

While creating pods directly from the command line is useful for testing purposes, it is not a best practice. To manually create a pod, use the kubectl run command:

kubectl run [pod-name] --image=[container-image] --restart=Never

The --restart=Never option prevents the pod from continuously attempting to restart, which would cause a crash loop. The following example shows an Nginx pod created with kubectl run.
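A command matching that description might look like this; the pod name is an assumption:

kubectl run nginx --image=nginx --restart=Never
kubectl get pods          # confirms the new pod appears in the list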

Deploy pods

The recommended way to create pods is through workload resources (Deployments, ReplicaSets, and so on). For example, the following YAML file creates an Nginx deployment with five pod replicas. Each pod runs a single container with the latest nginx image.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

To create pods from the YAML file, use the kubectl create command:

kubectl create -f [yaml-file]
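To confirm the result, the following standard kubectl commands can be used; the deployment name and label come from the sample manifest above:

kubectl get deployment nginx        # shows desired vs. ready replicas
kubectl get pods -l app=nginx       # lists the five pods created from the template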

Update or replace pods

Some pod specifications, such as metadata and names, are immutable after Kubernetes creates a pod. To make changes, you must modify the pod template and create new pods with the desired features.
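With a Deployment, for example, you can edit the template and re-apply it, or change only the container image; the file name and image tag below are assumptions:

kubectl apply -f nginx-deployment.yaml                  # re-apply the modified pod template
kubectl set image deployment/nginx nginx=nginx:1.25     # or update only the container image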

List pods

View the available pods using the following command:

kubectl get pod

The result displays the list of pods in the current namespace, along with their status, age, and other information.
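A few commonly used variations of the listing command; the namespace placeholder is an assumption:

kubectl get pods -o wide                  # adds node and pod IP columns
kubectl get pods --all-namespaces         # lists pods in every namespace
kubectl get pods -n [namespace-name]      # lists pods in a specific namespace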

Restart pods

There is no direct way to restart a pod in Kubernetes using kubectl. However, there are three workarounds available:

  • A rolling restart is the fastest method available. Kubernetes performs a step-by-step shutdown and restart of each container in a deployment (see the example commands after this list).
  • Changing an environment variable forces the pods to restart and synchronize with the change.
  • Scaling the replicas to zero and then back to the desired number recreates all the pods.
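The corresponding commands might look like the sketch below, assuming a deployment named nginx; the environment variable name is arbitrary:

kubectl rollout restart deployment nginx                    # rolling restart of the deployment's pods
kubectl set env deployment nginx RESTARTED_AT="$(date)"     # changing an environment variable triggers a restart
kubectl scale deployment nginx --replicas=0
kubectl scale deployment nginx --replicas=5                 # scale down to zero, then back to the desired number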

Delete pods

Kubernetes automatically deletes pods after they complete their lifecycle. Each removed pod receives a 30-second grace period in which to terminate gracefully.

You can also delete a pod from the command line by passing the YAML file containing the pod specification to kubectl delete:

kubectl delete -f [yaml-file]

Adding the --grace-period=0 and --force flags overrides the grace period and removes the pod from the cluster immediately.

View pod logs

The kubectl logs command allows the user to view the logs of a specific pod:

kubectl logs [pod-name]
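Commonly used variations of the command; the container name placeholder applies to multi-container pods:

kubectl logs -f [pod-name]                      # stream the log output continuously
kubectl logs [pod-name] -c [container-name]     # select one container in a multi-container pod
kubectl logs --previous [pod-name]              # show logs from the previous container instance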

Assign pods to nodes

Kubernetes automatically decides which nodes host which pods, based on the specification provided when creating the workload resource. However, there are two ways a user can influence the choice of a node:

  • Using the nodeSelector field in the YAML file allows you to select specific nodes, as shown in the sketch after this list.
  • Creating a DaemonSet resource provides a way to overcome scheduling limitations and ensure that a specific application is deployed to all nodes in the cluster.
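A minimal sketch of the nodeSelector approach; the pod name, image, and node label are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod                 # placeholder name
spec:
  nodeSelector:
    disktype: ssd               # placeholder node label; the pod is scheduled only on matching nodes
  containers:
  - name: app
    image: nginx:latest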

Monitor pods

Collecting data from individual pods is useful for getting a clear picture of cluster health. Essential pod data to monitor includes:

  • Total pod instances. This parameter helps ensure high availability.
  • The actual number of pod instances versus the expected number of pod instances. Comparing the two helps plan resource redistribution.
  • Pod deployment status. Monitoring deployment status identifies misconfigurations and problems with distributing pods to nodes (see the example commands after this list).
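Standard commands that surface these numbers include the following; note that kubectl top requires the metrics-server add-on:

kubectl get deployments                                     # READY column compares actual vs. expected replicas
kubectl get pods --field-selector=status.phase=Running      # filters pods by phase
kubectl top pods                                            # per-pod CPU and memory usage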

Conclusion

This article provided a comprehensive overview of Kubernetes pods for novice users of this popular orchestration platform. After reading the article, you should know what pods are, how they work, and how they are managed.
