nodeSelector
Use `nodeSelector` for scheduling pods on specific nodes. This is a simple way to control where pods are scheduled by adding a key-value pair to a chart or manifest specification.
There are generally three types of nodes in a cluster: main, worker, and GPU.
Main Node
By default, main (control-plane) nodes are tainted with `node-role.kubernetes.io/control-plane:NoSchedule`. This prevents any non-essential cluster workloads from being scheduled on them.
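To confirm the taint on a given control-plane node, a quick check (the node name below is a placeholder) looks like this:

```sh
# Show the taints currently applied to the main node
# (replace <main-node-name> with the actual node name)
kubectl describe node <main-node-name> | grep -i taints
```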
If the main node has sufficient resources to handle additional workloads, or the installation environment is constrained by the number of available nodes, this taint can be removed (see the command below) and the node can additionally be labeled `worker` as described in the next section.
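A minimal sketch of removing the default control-plane taint; the trailing `-` is what removes the taint, and the node name is a placeholder:

```sh
# Remove the NoSchedule taint from the main node so it can accept workloads
kubectl taint nodes <main-node-name> node-role.kubernetes.io/control-plane:NoSchedule-
```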
Allowing workloads to be scheduled on the main node is at the discretion of the cluster admin. This will not negatively affect cluster performance for low-to-mid-level operations.
Worker Node
This node is used for running the bulk of application workloads, typically separate from the main and GPU nodes. Label this node as `worker` by executing the following command from a main (control-plane) node:
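The exact label key and value are not fixed by this guide; `node-type=worker` below is an assumed convention and must match the `nodeSelector` used in the charts/manifests:

```sh
# Label the worker node (replace <worker-node-name> with the actual node name)
kubectl label nodes <worker-node-name> node-type=worker
```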
GPU Node
This node is generally used for running GPU-intensive pods/applications, separate from the worker node; however, it can also be used for additional workloads if the instance has sufficient resources to handle both GPU and application tasks. As with the main node, it is up to the cluster admin to determine whether GPU nodes can also handle application workloads. If using this node only for GPU-specific tasks, label it as `gpu` by executing the following from the main (control-plane) node:
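As above, `node-type=gpu` is an assumed key-value pair, not a mandated one; whatever is chosen must match the `nodeSelector` in the GPU workloads' specs:

```sh
# Label the GPU node (replace <gpu-node-name> with the actual node name)
kubectl label nodes <gpu-node-name> node-type=gpu
```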
NOTE: Nodes must have the respective pod images available. This can be achieved by manually importing images into containerd on each node or by using a centralized registry.
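For the manual route, a typical containerd import looks like the sketch below; it assumes the image was saved to a tarball beforehand (e.g. via `docker save`), and the filename is a placeholder:

```sh
# Import a saved image tarball into containerd's Kubernetes namespace
# so the kubelet can find it (run on each node that needs the image)
ctr -n k8s.io images import my-app-image.tar
```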
Helm Charts / Manifest
Each Helm chart and/or manifest YAML must include the proper `nodeSelector` key-value pair for pods to schedule.
Example for worker node only:
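A minimal pod spec snippet, assuming the node was labeled with `node-type=worker` as in the earlier section:

```yaml
# Schedules the pod only on nodes carrying the node-type=worker label
spec:
  nodeSelector:
    node-type: worker
```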
A `nodeSelector` can also include two node labels.
NOTE: If two `key:value` pairs are designated under `nodeSelector`, then BOTH labels MUST be present on the node. Only use the following example if the true intent is to specifically limit pods to a certain node.
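A sketch of the two-label case, again with assumed label names (`node-type=gpu` and the hypothetical `workload=inference`); a node must carry both labels for the pod to schedule there:

```yaml
# BOTH labels must exist on the target node, or the pod stays Pending
spec:
  nodeSelector:
    node-type: gpu
    workload: inference
```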