ami-05576a079321f21f8
containerd/config.toml
The old insecure kubelet port (10255) is still being utilized. The following setup does NOT include the old insecure port. It is highly recommended that customers upgrade to K8S v1.30 or higher, which includes the new secure port (10250) by default.
Modify config.toml:
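The original snippet is not preserved here; as an illustration only (an assumption, not necessarily the exact edit this document intends), the change most commonly required in containerd's config.toml for kubeadm clusters is enabling the systemd cgroup driver:

```toml
# /etc/containerd/config.toml (excerpt) - enable the systemd cgroup driver for runc
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

Restart containerd after editing (sudo systemctl restart containerd) so the change takes effect.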
Note that the following steps have to be performed as root (sudo su -), since they involve installation of Kubernetes.
Run the command below to initialize and set up the Kubernetes master.
The default CNI CIDR, 192.168.100.0/19, allows for up to 32 nodes (Flannel reserves one subnet for each node). If more nodes are required, the netmask/subnet size needs to be increased.
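The sizing claim can be sanity-checked: assuming Flannel's default per-node subnet length of /24 (an assumption; Flannel's SubnetLen defaults to 24), a /19 pod CIDR splits into 2^(24-19) node subnets:

```shell
# Number of /24 node subnets that fit in a /19 pod CIDR
NODES=$(( 2 ** (24 - 19) ))
echo "$NODES"   # prints 32
```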
The CNI and service CIDRs can be modified to match the respective network requirements:
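A sketch of the init command using standard kubeadm flags; the pod CIDR comes from this document, while the service CIDR placeholder is illustrative:

```shell
# Initialize the control plane; adjust CIDRs to your network plan
kubeadm init \
  --pod-network-cidr=192.168.100.0/19 \
  --service-cidr=<service-cidr>
```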
To run kubectl commands, the kubeconfig needs to be updated (for the regular or root user):
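These are the commands kubeadm itself prints at the end of init for a regular user; for the root user, pointing KUBECONFIG at the admin config also works:

```shell
# Regular user
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Root user
export KUBECONFIG=/etc/kubernetes/admin.conf
```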
The node STATUS will be NotReady until Flannel is installed.

Note that the following steps have to be performed as root (sudo su -), since they involve installation of Kubernetes.
Run the following command to join the Kubernetes cluster created in the previous section:
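The exact join command (with a fresh token and CA certificate hash) is printed at the end of kubeadm init on the master; the placeholders below are illustrative:

```shell
kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

If the original token has expired, running kubeadm token create --print-join-command on the master prints a fresh join command.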
The node STATUS will be NotReady until Flannel is installed.
Note that the following steps have to be performed as root (sudo su -), since they involve installation of Kubernetes.
Run the following command to join the Kubernetes cluster created in the previous section:
The node STATUS will be NotReady until Flannel is installed.

lilt-registry.local.io:80
Note that the following steps have to be performed as root (sudo su -).
Move to the root directory:
Check out the required release tag (<release-tag>):

Note that the following steps have to be performed as root (sudo su -).
Move to the root directory:
Check out the required release tag (<release-tag>):

Note that the following steps have to be performed as root (sudo su -).
Move to the root directory:
Check out the required release tag (<release-tag>):

Kubernetes supports nodeSelector for scheduling pods on specific nodes. This is a simple way to control where pods are scheduled by adding a key-value pair to the chart/manifest specifications.
Master Node
Login to the k8s-master node and follow the steps below. Note that the following steps have to be performed as root (sudo su -).
The worker node is used for running the bulk of the application workloads, typically separate from the master and GPU nodes. Label this node as worker (node name set in the above section) by executing the following command:
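A sketch of the labeling command; the node-type key is the label referenced later in this document, and the node name is a placeholder:

```shell
kubectl label node <worker-node-name> node-type=worker
```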
Label the GPU node as gpu (node name set in the above section) by executing the following command:
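Similarly for the GPU node (node name is a placeholder):

```shell
kubectl label node <gpu-node-name> node-type=gpu
```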
To remove the node-type label from the gpu node:
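In kubectl, a trailing dash on the key removes a label; for example (node name is a placeholder):

```shell
kubectl label node <gpu-node-name> node-type-
```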
Note that the following steps have to be performed as root (sudo su -).
Modify flannel's on-prem-values.yaml:
Set podCidr to match the k8s CNI_CIDR set above:
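As a sketch, the relevant fragment of on-prem-values.yaml would look like the following, with the CIDR from the earlier kubeadm setup; the surrounding structure of the file is not shown here:

```yaml
# Must match the CNI_CIDR used when the cluster was initialized
podCidr: "192.168.100.0/19"
```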
To create the json key, use this document.

Create imagePullSecrets
Master Node
Login to the k8s-master node and follow the steps below. Note that the following steps have to be performed as root (sudo su -).
Create the cluster image pull secret:
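A sketch of the secret creation, assuming a docker-registry type secret; the secret name is a placeholder, the namespace is taken from other commands in this document, and _json_key is the conventional username for json-key registry authentication:

```shell
kubectl create secret docker-registry <secret-name> \
  --namespace lilt \
  --docker-server=lilt-registry.local.io:80 \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)"
```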
on-prem-values.yaml. Update these files with the new imagePullSecret.
As an example, this is from the redis custom values:
The cluster is accessed through nginx-ingress running on the worker node.
If accessing from a workstation inside the cluster network, get the local IP address of the worker node:
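Either of the following works, assuming shell access to the node or a working kubectl configuration (the node name is a placeholder):

```shell
# On the worker node itself
hostname -I | awk '{print $1}'

# Or from any machine with kubectl configured
kubectl get node <worker-node-name> -o wide
```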
Once the correct IP address is determined, modify the hosts file on your local workstation:
Add an entry mapping the worker node's IP address to bare.lilt.com:
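For example, with 192.0.2.10 standing in for the worker node's real IP address:

```
192.0.2.10   bare.lilt.com
```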
kubectl delete pod -n lilt <podname>
Sometimes flannel does not load correctly on all nodes. Identify the node on which the flannel init container does not complete, and restart containerd on that node:
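A sketch of the recovery steps; the flannel pod name and its namespace depend on how flannel was deployed (kube-flannel is an assumption):

```shell
# On the affected node
sudo systemctl restart containerd

# Then delete the stuck flannel pod so it is recreated
kubectl -n kube-flannel delete pod <flannel-pod-name>
```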
nginx-ingress needs to be restarted to pick up new values for front. Try restarting the nginx-ingress daemonset:
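For example, assuming the daemonset is named nginx-ingress and lives in the lilt namespace (taken from other commands in this document):

```shell
kubectl -n lilt rollout restart daemonset nginx-ingress
```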
The core pods may be stuck in CrashLoopBackOff. A potential fix is to delete the elasticsearch helm deployment and its PVCs and then re-deploy elasticsearch:
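A sketch, assuming the helm release is named elasticsearch in the lilt namespace and its PVCs carry an app=elasticsearch label (both assumptions); the re-deploy step uses whatever chart or install script this repo provides:

```shell
helm -n lilt uninstall elasticsearch
# WARNING: deleting PVCs destroys the stored index data
kubectl -n lilt delete pvc -l app=elasticsearch
# Then re-deploy elasticsearch using the repo's chart/install script
```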
Restart the lilt-beehive deployment:
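For example (namespace taken from other commands in this document):

```shell
kubectl -n lilt rollout restart deployment lilt-beehive
```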
sh install_scripts/install-nvidia-device-plugin.sh
Restart the rabbitmq statefulset and the minio deployment:
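For example (workload names assumed to match the text, namespace taken from other commands in this document):

```shell
kubectl -n lilt rollout restart statefulset rabbitmq
kubectl -n lilt rollout restart deployment minio
```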