A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
- Run a cluster storage daemon on each node
- Run a log collection daemon on each node
- Run a node monitoring daemon on each node

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex configuration might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and CPU requests for different types of hardware.
Writing a DaemonSet specification

Creating a DaemonSet

You can describe a DaemonSet in a YAML file. For example, the following daemonset.yaml file describes a DaemonSet that runs the fluentd-elasticsearch Docker image.

Create the DaemonSet based on the YAML file:

kubectl apply -f daemonset.yaml
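The example manifest itself is not reproduced here. As a rough sketch only (the image path and tag, resource values, and volume mounts are illustrative assumptions, not authoritative), a daemonset.yaml for a fluentd-elasticsearch log collector might look like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch   # must match spec.selector
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2  # illustrative image
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log   # collect node-level logs from the host
```

Note the absence of a replicas field: the DaemonSet controller, not a replica count, decides how many Pods exist (one per eligible node).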
As with all other Kubernetes configurations, a DaemonSet needs apiVersion, kind, and metadata fields. For general information about working with configuration files, see Running stateless applications and managing objects with kubectl.
The name of a DaemonSet object must be a valid DNS subdomain name.
A DaemonSet also needs a .spec section.

Pod template

The .spec.template is one of the required fields in .spec.

The .spec.template is a pod template. It has exactly the same schema as a Pod, except that it is nested and does not have an apiVersion or kind.
In addition to the required fields for a Pod, a Pod template in a DaemonSet has to specify appropriate labels (see pod selector).
A Pod template in a DaemonSet must have a RestartPolicy equal to Always, or be unspecified, which defaults to Always.
Pod selector

The .spec.selector field is a pod selector. It works just like the .spec.selector of a Job.
You must specify a pod selector that matches the labels in .spec.template. Also, once a DaemonSet is created, its .spec.selector cannot be mutated. Mutating the pod selector can lead to unintentionally orphaning Pods, and it was found to be confusing to users.
The .spec.selector is an object that consists of two fields:

- matchLabels: works just like the .spec.selector of a ReplicationController.
- matchExpressions: allows you to build more sophisticated selectors by specifying a key, a list of values, and an operator that relates the key and values.
When both are specified, the result is ANDed.
The .spec.selector must match .spec.template.metadata.labels. Configurations with these two not matching will be rejected by the API.
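As an illustration of a selector combining both fields (the tier label and its value are hypothetical), the sketch below shows the two conditions being ANDed, with template labels that satisfy both:

```yaml
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch   # condition 1
    matchExpressions:
    - key: tier                     # condition 2, ANDed with condition 1
      operator: In
      values: ["logging"]
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch # satisfies matchLabels
        tier: logging               # satisfies matchExpressions
```

If the template labels did not satisfy both conditions, the API server would reject the DaemonSet.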
Running Pods on selected nodes

If you specify a .spec.template.spec.nodeSelector, the DaemonSet controller will create Pods on nodes that match that node selector. Likewise, if you specify a .spec.template.spec.affinity, the DaemonSet controller will create Pods on nodes that match that node affinity. If you specify neither, the DaemonSet controller will create Pods on all nodes.
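As a minimal sketch, restricting a DaemonSet to a subset of nodes with a node selector could look like this (disktype: ssd is a hypothetical node label):

```yaml
spec:
  template:
    spec:
      # Only nodes carrying the label disktype=ssd get a copy of the Pod.
      nodeSelector:
        disktype: ssd
```

For more expressive rules (operators, multiple terms), use .spec.template.spec.affinity.nodeAffinity instead of a plain nodeSelector.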
How daemon Pods are scheduled
A DaemonSet ensures that all eligible nodes run a copy of a Pod. The DaemonSet controller creates a Pod for each eligible node and adds the spec.affinity.nodeAffinity field of the Pod to match the target host. After the Pod is created, the default scheduler typically takes over and binds the Pod to the target host by setting the .spec.nodeName field. If the new Pod does not fit on the node, the default scheduler may preempt (evict) some of the existing Pods based on the priority of the new Pod.
The user can specify a different scheduler for the DaemonSet Pods by setting the .spec.template.spec.schedulerName field of the DaemonSet.
The DaemonSet controller takes the original node affinity specified in the .spec.template.spec.affinity.nodeAffinity field (if any) into account when evaluating eligible nodes, but it is replaced on the created Pod with a node affinity that matches the name of the eligible node.
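As an illustration of this mechanism, the node affinity injected into each created Pod looks roughly like the following, where target-host-name stands in for the eligible node's actual name:

```yaml
# Injected by the DaemonSet controller; pins the Pod to one specific node
# by matching on the node object's metadata.name field.
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name
```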
Taints and tolerations

The DaemonSet controller automatically adds a set of tolerations to DaemonSet Pods:

Tolerations for DaemonSet Pods

| Toleration key | Effect | Details |
|---|---|---|
| node.kubernetes.io/not-ready | NoExecute | DaemonSet Pods can be scheduled onto nodes that are not healthy or ready to accept Pods. Any DaemonSet Pods running on such nodes will not be evicted. |
| node.kubernetes.io/unreachable | NoExecute | DaemonSet Pods can be scheduled onto nodes that are unreachable from the node controller. Any DaemonSet Pods running on such nodes will not be evicted. |
| node.kubernetes.io/disk-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with disk pressure issues. |
| node.kubernetes.io/memory-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with memory pressure issues. |
| node.kubernetes.io/pid-pressure | NoSchedule | DaemonSet Pods can be scheduled onto nodes with process pressure issues. |
| node.kubernetes.io/unschedulable | NoSchedule | DaemonSet Pods can be scheduled onto nodes that are unschedulable. |
| node.kubernetes.io/network-unavailable | NoSchedule | Only added for DaemonSet Pods that request host networking, i.e., Pods having spec.hostNetwork: true. Such DaemonSet Pods can be scheduled onto nodes whose network is unavailable. |
You can also add your own tolerations to the Pods of a DaemonSet by defining them in the DaemonSet's Pod template.
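For example, a DaemonSet's Pod template could tolerate a custom taint like this (the taint key example.com/special is hypothetical):

```yaml
spec:
  template:
    spec:
      tolerations:
      # Allow scheduling onto nodes tainted example.com/special:NoSchedule,
      # regardless of the taint's value (operator: Exists).
      - key: example.com/special
        operator: Exists
        effect: NoSchedule
```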
Because the DaemonSet controller sets the node.kubernetes.io/unschedulable:NoSchedule toleration automatically, Kubernetes can run DaemonSet Pods on nodes that are marked as unschedulable.
If you use a DaemonSet to provide an important node-level function, such as cluster networking, it is useful for Kubernetes to place DaemonSet Pods on nodes before they are ready. For example, without that special toleration, you could end up in a deadlock situation where the node is not marked as ready because the network plugin is not running there, and at the same time the network plugin is not running on that node because the node is not ready yet.
Communicating with daemon Pods

Some possible patterns for communicating with Pods in a DaemonSet are:
- Push: Pods in the DaemonSet are configured to send updates to another service, such as a stats database. They have no clients.
- NodeIP and known port: The pods in the DaemonSet can use a hostPort, so that the pods are accessible through the node’s IPs. Clients know the node IP list in some way and know the port by convention.
- DNS: Create a headless service with the same pod selector, and then discover DaemonSets using the endpoints resource or retrieve multiple A records from DNS.
- Service: Create a service with the same pod selector and use the service to reach a daemon on a random node. (There is no way to reach a specific node.)
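A minimal sketch of the DNS pattern above, assuming the fluentd-elasticsearch selector used earlier (the port number is a hypothetical choice):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  clusterIP: None   # headless: DNS returns the individual Pod IPs
  selector:
    name: fluentd-elasticsearch   # same pod selector as the DaemonSet
  ports:
  - port: 24224
```

With clusterIP: None, a DNS lookup of the service name returns an A record per daemon Pod, letting clients address specific nodes' daemons directly.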
Updating a DaemonSet

If node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from newly non-matching nodes. You can modify the Pods that a DaemonSet creates. However, Pods do not allow all fields to be updated. In addition, the DaemonSet controller will use the original template the next time a node (even with the same name) is created.
You can delete a DaemonSet. If you specify --cascade=orphan with kubectl, the Pods will be left on the nodes. If you then create a new DaemonSet with the same selector, the new DaemonSet adopts the existing Pods. If any Pods need replacing, the DaemonSet replaces them according to its updateStrategy.
You can perform a rolling update on a DaemonSet.
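A sketch of the updateStrategy field that controls such an update (the maxUnavailable value is illustrative):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate     # the default; OnDelete is the alternative
    rollingUpdate:
      maxUnavailable: 1     # replace daemon Pods one node at a time
```

With type: OnDelete, new Pods are created only after you manually delete old ones.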
Alternatives to DaemonSet

Init scripts

It is certainly possible to run daemon processes by starting them directly on a node (for example, using init, upstartd, or systemd). This is perfectly fine. However, there are several advantages of running such processes through a DaemonSet:
- Ability to monitor and manage logs for daemons in the same way as applications.
- The same configuration language and tools (e.g. pod templates, kubectl) for daemons and applications.
- Running daemons in containers with resource limits increases isolation between daemons and app containers. However, this can also be achieved by running the daemons in a container but not in a Pod.
Bare Pods

You can create Pods directly that specify a particular node on which to run. However, a DaemonSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, you should use a DaemonSet instead of creating individual Pods.
Static Pods

It is possible to create Pods by writing a file to a certain directory watched by the kubelet. These are called static Pods. Unlike DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods do not depend on the apiserver, which makes them useful in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
Deployments

DaemonSets are similar to Deployments in that they both create Pods, and those Pods have processes that are not expected to terminate (e.g., web servers, storage servers).
Use a Deployment for stateless services, such as frontends, where scaling the number of replicas up and down and rolling out updates are more important than controlling exactly which host the Pod runs on. Use a DaemonSet when it is important that a copy of a Pod always runs on all or certain hosts, and when the DaemonSet provides node-level functionality that allows other Pods to run successfully on that particular node.
For example, network plugins often include a component that runs as a DaemonSet. The DaemonSet component ensures that the node where it runs has a working cluster network.
What's next

- Learn more about Pods.
- Learn about static Pods, which are useful for running Kubernetes control plane components.
- Find out how to use DaemonSets:
  - Perform a rolling update on a DaemonSet.
  - Perform a rollback on a DaemonSet (for example, if a rollout did not work as expected).
- Understand how Kubernetes assigns Pods to Nodes.
- Learn about device plugins and add-ons, which often run as DaemonSets.
- DaemonSet is a top-level resource in the Kubernetes REST API. Read the DaemonSet object definition to understand the API for daemon sets.