The Lilt cluster uses nodeSelector to schedule pods on specific nodes. This is a simple way to control where pods run by adding a key-value pair to a chart or manifest specification. There are generally three types of nodes in a cluster: main, worker, and GPU.

Main Node

By default, main (control-plane) nodes are tainted with node-role.kubernetes.io/control-plane:NoSchedule. This prevents any non-essential cluster workloads from being scheduled on them. If the main node has sufficient resources to handle additional workloads, or the installation environment is constrained by the number of available nodes, this taint can be removed:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
The node can then be labeled as a worker as described in the next section. Allowing workloads to be scheduled on the main node is at the discretion of the cluster admin; it will not negatively affect cluster performance for low- to mid-level operations.
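To confirm the taint was removed, inspect the node's taints. A minimal check, assuming a control-plane node named main-1 (substitute the actual node name):
# main-1 is a placeholder node name; control-plane:NoSchedule should no longer appear
kubectl describe node main-1 | grep Taints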

Worker Node

This node is used for running the bulk of application workloads, typically separate from the main and GPU nodes. Label this node as worker by executing the following command from a main (control-plane) node:
> kubectl label nodes <node name> node-type=worker
Verify the output:
# Show labels
kubectl get nodes --show-labels
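To confirm that only the intended nodes carry the label, the node list can also be filtered by it:
# List only nodes labeled as workers
kubectl get nodes -l node-type=worker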

GPU Node

This node is generally used for running GPU-intensive pods/applications separate from the worker node; however, it can also be used for additional workloads if the instance has sufficient resources to handle both GPU and application tasks. As with the main node, it is up to the cluster admin to determine whether GPU nodes should also handle application workloads. If the node is used only for GPU-specific tasks, label it as gpu by executing the following from the main (control-plane) node:
> kubectl label nodes <node name> capability=gpu
Optional: if the GPU node has sufficient resources to run additional workloads, also add the worker label:
> kubectl label nodes <node name> capability=gpu node-type=worker
Verify the output:
# Show labels
kubectl get nodes --show-labels
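To confirm which nodes will receive GPU workloads, filter by one or both labels:
# List GPU-only nodes
kubectl get nodes -l capability=gpu
# List GPU nodes that also accept general application workloads
kubectl get nodes -l capability=gpu,node-type=worker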
NOTE: nodes must contain the respective pod images. This can be achieved by manually importing images into containerd on each node or by using a centralized repository.
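If a centralized repository is not available, an image can be loaded into containerd manually on each node. A minimal sketch, assuming the image was saved to a tarball named lilt-app.tar (a placeholder file name):
# Import the image into the k8s.io namespace used by Kubernetes (lilt-app.tar is a placeholder)
ctr -n k8s.io images import lilt-app.tar
# Confirm the image is present on the node
ctr -n k8s.io images ls | grep lilt-app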

Helm Charts / Manifest

Each Helm chart and/or manifest YAML must include the proper nodeSelector key/value pair for pods to schedule. Example for the worker node only:
nodeSelector:
  node-type: worker
Example for GPU node only:
nodeSelector:
  capability: gpu
The previous two examples are sufficient for scheduling all Lilt app pods; however, in very specific circumstances nodeSelector can include two node labels.
NOTE: if two key:value pairs are designated for nodeSelector, then BOTH labels MUST be present on the node. Only use the following example if the intent is to specifically limit pods to a certain node.
This example will only schedule pods on a node that contains BOTH labels:
nodeSelector:
  node-type: worker
  capability: gpu
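For reference, nodeSelector belongs under the pod spec of a workload. A minimal sketch of its placement in a Deployment manifest, using a placeholder name and image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lilt-app          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lilt-app
  template:
    metadata:
      labels:
        app: lilt-app
    spec:
      # Pods will only schedule on nodes labeled node-type=worker
      nodeSelector:
        node-type: worker
      containers:
        - name: lilt-app
          image: lilt-app:latest   # placeholder image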