Overview
By default, LILT ships with MinIO enabled. However, if an environment has direct access to S3, using S3 is the recommended approach. This document covers configuring LILT to use S3 instead of MinIO; it does not cover transitioning an existing MinIO deployment to S3. Please work with your program manager to facilitate that process. There are two options for authenticating with S3, depending on the cluster type (managed or unmanaged):
- Unmanaged: Identity and Access Management (IAM) using an AWS access key and secret key
- Managed (EKS): IAM Roles for Service Accounts (IRSA) that attaches a service account to a pod
Uninstall MinIO
There is currently no mechanism in LILT to automatically remove MinIO from a deployment; this feature is planned for the end of 2025. In the meantime, MinIO may be removed directly by running:
helm uninstall minio
Configure LILT
Update Secrets
LILT ships with a default AWS credential that must be removed. Edit lilt/environments/lilt/secrets.yaml and remove the following line:
front:
onpremValues:
config:
front_secret_values:
s3secretkey: testsecretkey # Only remove this line
Option 1: Unmanaged IAM
Update Values
If you plan to use Cloud Upload (direct upload to S3), create an SQS queue.
Ref: https://us-east-1.console.aws.amazon.com/sqs/v3/home?#/create-queue
You can keep all defaults, but attach a policy; you'll need to modify it once the S3 bucket is created in the next step.
Sample Policy:
{
"Version": "2012-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__owner_statement",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT>:root"
},
"Action": "SQS:*",
"Resource": "arn:aws:sqs:<AWS_REGION>:<AWS_ACCOUNT>:<QUEUE_NAME>"
}
]
}
Create an S3 bucket if you don't have one
Ref: https://us-east-1.console.aws.amazon.com/s3/bucket/create Keep note of the name for later use.
Attach the following CORS policy to the bucket:
[
{
"AllowedOrigins": ["https://yourdomain.com"],
"AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3000
}
]
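If you prefer the AWS CLI to the console, the same CORS policy can be attached from the command line. This is a sketch: it assumes the policy above is saved as cors.json (a placeholder file name) and that your CLI credentials can modify the bucket.

```shell
# Attach the CORS policy above (saved as cors.json) to the bucket
aws s3api put-bucket-cors \
  --bucket <BUCKET_NAME> \
  --cors-configuration file://cors.json

# Verify it was applied
aws s3api get-bucket-cors --bucket <BUCKET_NAME>
```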
Update the SQS access policy (if needed) to allow the S3 bucket to write to the SQS queue
Ref: https://us-east-1.console.aws.amazon.com/sqs/v3/home?#/queues Access your queue and click the Access policy Edit button. Add the following block to the statements:
{
"Sid": "AllowS3SendMessage",
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "SQS:SendMessage",
"Resource": "arn:aws:sqs:<AWS_REGION>:<AWS_ACCOUNT>:<QUEUE_NAME>",
"Condition": {
"ArnEquals": {
"aws:SourceArn": ["arn:aws:s3:::<BUCKET_NAME>"]
}
}
}
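Note that the queue policy above only permits S3 to send messages; the bucket itself must still be configured to publish events to the queue. A sketch using the AWS CLI, assuming object-created events are what the uploader consumes (confirm the required event types for your deployment):

```shell
# Publish object-created events from the bucket to the SQS queue
aws s3api put-bucket-notification-configuration \
  --bucket <BUCKET_NAME> \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "QueueArn": "arn:aws:sqs:<AWS_REGION>:<AWS_ACCOUNT>:<QUEUE_NAME>",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'
```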
Create an IAM policy:
Ref: https://us-east-1.console.aws.amazon.com/iam/home?#/policies/create Replace the <...> fields with the names from the previous steps. Keep note of the policy name for later use.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::<BUCKET_NAME>",
"arn:aws:s3:::<BUCKET_NAME>/*"
]
},
{
"Effect": "Allow",
"Action": [
"sqs:SendMessage",
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes"
],
"Resource": "arn:aws:sqs:<AWS_REGION>:<AWS_ACCOUNT>:<QUEUE_NAME>"
}
]
}
Create an AWS user for the workload
Ref: https://us-east-1.console.aws.amazon.com/iam/home?#/users/create Keep note of the username you generate for later use.
Attach the policy generated in the previous step.
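The same user setup can be scripted with the AWS CLI. A sketch, assuming the policy from the previous step was created as a customer-managed policy; the create-access-key call returns the access key and secret key recorded below.

```shell
# Create the workload user and attach the policy from the previous step
aws iam create-user --user-name <USER_NAME>
aws iam attach-user-policy \
  --user-name <USER_NAME> \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT>:policy/<POLICY_NAME>

# Generate the access key pair (note the AccessKeyId and SecretAccessKey)
aws iam create-access-key --user-name <USER_NAME>
```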
Collect the following information for your bucket:
AWS_ACCESS_KEY: <AWS_ACCESS_KEY>
AWS_SECRET_KEY: <AWS_SECRET_KEY>
AWS_BUCKET: <AWS_BUCKET>
AWS_REGION: <AWS_REGION>
AWS_S3_ENDPOINT_URL: <AWS_S3_ENDPOINT_URL>
AWS_S3_ENDPOINT_URL_SSL: <AWS_S3_ENDPOINT_URL_SSL>
UPLOADER_SUBSCRIPTION_NAME: <QUEUE_URL>
For example:
AWS_S3_ENDPOINT_URL: http://s3.us-east-1.amazonaws.com
AWS_S3_ENDPOINT_URL_SSL: https://s3.us-east-1.amazonaws.com
Remember to change the region in the example URLs (e.g., us-east-1) if your S3 bucket is in a different AWS region.
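The endpoint URLs follow a fixed per-region pattern, so they can be derived from the bucket's region:

```shell
# Derive the endpoint values from the bucket's AWS region
REGION="us-east-1"   # replace with your bucket's region
echo "AWS_S3_ENDPOINT_URL: http://s3.${REGION}.amazonaws.com"
echo "AWS_S3_ENDPOINT_URL_SSL: https://s3.${REGION}.amazonaws.com"
```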
Edit your override values at lilt/environments/lilt/values.yaml.
Add all of the following values, replacing the <> placeholders with the values documented above:
AWS_ACCESS_KEY: &AWS_ACCESS_KEY <AWS_ACCESS_KEY>
AWS_SECRET_KEY: &AWS_SECRET_KEY <AWS_SECRET_KEY>
AWS_BUCKET: &AWS_BUCKET <AWS_BUCKET>
AWS_REGION: &AWS_REGION <AWS_REGION>
AWS_S3_ENDPOINT_URL: &AWS_S3_ENDPOINT_URL <AWS_S3_ENDPOINT_URL>
# Compound strings, please replace <AWS_BUCKET> with the bucket name and keep the rest of the text
S3_WPA_PATH: &S3_WPA_PATH "s3://<AWS_BUCKET>/wpa/"
S3_TRAINED_DATA_PATH: &S3_TRAINED_DATA_PATH "s3://<AWS_BUCKET>/trained/"
S3_TESSERACT_PATH: &S3_TESSERACT_PATH "s3://<AWS_BUCKET>/tesseract/"
S3_MINIO_ENDPOINT: &S3_MINIO_ENDPOINT s3://<AWS_BUCKET>
USE_CLOUD_UPLOAD: &USE_CLOUD_UPLOAD true
UPLOADER_SUBSCRIPTION_NAME: &UPLOADER_SUBSCRIPTION_NAME "https://sqs.<AWS_REGION>.amazonaws.com/<AWS_ACCOUNT>/<QUEUE_NAME>"
UPLOADER_STORAGETYPE: &UPLOADER_STORAGETYPE s3
UPLOADER_EVENT_SOURCE_TYPE: &UPLOADER_EVENT_SOURCE_TYPE sqs
UPLOADER_MAX_FILE_SIZE_DEFAULT: &UPLOADER_MAX_FILE_SIZE_DEFAULT "2147483648"
AV_SCANNER_HTTP_TIMEOUT_MS: &AV_SCANNER_HTTP_TIMEOUT_MS "3600"
### ---- NO NEED TO MODIFY BELOW ---- ###
front:
onpremValues:
env:
USE_CLOUD_UPLOAD: *USE_CLOUD_UPLOAD
UPLOADER_SUBSCRIPTION_NAME: *UPLOADER_SUBSCRIPTION_NAME
UPLOADER_EVENT_SOURCE_TYPE: *UPLOADER_EVENT_SOURCE_TYPE
AWS_SQS_ACCESSKEY: *AWS_ACCESS_KEY
AWS_SQS_SECRETKEY: *AWS_SECRET_KEY
UPLOADER_MAX_FILE_SIZE_DEFAULT: *UPLOADER_MAX_FILE_SIZE_DEFAULT
AV_SCANNER_HTTP_TIMEOUT_MS: *AV_SCANNER_HTTP_TIMEOUT_MS
MM_BUCKET: *AWS_BUCKET
config:
front_secret_values:
awsSecretAccessKey: *AWS_SECRET_KEY
s3secretkey: *AWS_SECRET_KEY
front_configmap_values:
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
s3-region: *AWS_REGION
awsRegion: *AWS_REGION
s3accesskey: *AWS_ACCESS_KEY
awsS3Bucket: *AWS_BUCKET
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
awsS3EndpointUrlPublic: *AWS_S3_ENDPOINT_URL_SSL
storagetype: *UPLOADER_STORAGETYPE
lexicon:
onpremValues:
app:
bucketname: *AWS_BUCKET
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
s3-region: *AWS_REGION
converter:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
s3-privatekey: *AWS_SECRET_KEY
s3-accesskey: *AWS_ACCESS_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
assignment:
onpremValues:
app:
bucketname: *AWS_BUCKET
auditlog:
onpremValues:
app:
bucketname: *AWS_BUCKET
batch-tb:
onpremValues:
app:
bucketname: *AWS_BUCKET
tb:
onpremValues:
app:
bucketname: *AWS_BUCKET
tm:
onpremValues:
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
indexer:
onpremValues:
app:
bucketname: *AWS_BUCKET
linguist:
onpremValues:
app:
bucketname: *AWS_BUCKET
memory:
onpremValues:
app:
bucketname: *AWS_BUCKET
qa:
onpremValues:
app:
bucketname: *AWS_BUCKET
search:
onpremValues:
app:
bucketname: *AWS_BUCKET
segment:
onpremValues:
app:
bucketname: *AWS_BUCKET
tag:
onpremValues:
app:
bucketname: *AWS_BUCKET
watchdog:
onpremValues:
app:
bucketname: *AWS_BUCKET
job:
onpremValues:
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
workflow:
onpremValues:
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
file-translation:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
s3-privatekey: *AWS_SECRET_KEY
s3-accesskey: *AWS_ACCESS_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
s3-accesskey: *AWS_ACCESS_KEY
s3-privatekey: *AWS_SECRET_KEY
file-job:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
s3-privatekey: *AWS_SECRET_KEY
s3-accesskey: *AWS_ACCESS_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
extraInitContainer:
BUCKETNAME: *AWS_BUCKET
app:
bucketname: *AWS_BUCKET
args:
s3-endpoint: *AWS_S3_ENDPOINT_URL_SSL
s3-region: *AWS_REGION
bucket: *AWS_BUCKET
dataflow:
onpremValues:
env:
MINIO_ENDPOINT: *S3_MINIO_ENDPOINT
MINIO_ACCESS_KEY: *AWS_ACCESS_KEY
MINIO_SECRET_KEY: *AWS_SECRET_KEY
MINIO_REGION: *AWS_REGION
translatev4:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
init:
bucket: *AWS_BUCKET
outputPath: *S3_TRAINED_DATA_PATH
updatev4:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
wpaDataStoreBasePath: *S3_WPA_PATH
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
langid:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
update-managerv4:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
routing:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
batchv4:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
batch-worker-gpuv4:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
llm-inference:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
init:
bucket: *AWS_BUCKET
outputPath: *S3_TRAINED_DATA_PATH
automqm:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
alignment:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
init:
bucket: *AWS_BUCKET
outputPath: *S3_TRAINED_DATA_PATH
tag-projection:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
init:
bucket: *AWS_BUCKET
outputPath: *S3_TRAINED_DATA_PATH
nncache:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
bucket: *AWS_BUCKET
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
# Gemma3 inference
gemma-vllm-inference:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
storageType: s3
bucket: *AWS_BUCKET
init:
outputPath: *S3_TRAINED_DATA_PATH
# Emma-500 inference
emma-500-vllm-inference:
onpremValues:
config:
awsSecretAccessKey: *AWS_SECRET_KEY
awsAccessKeyId: *AWS_ACCESS_KEY
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
storageType: s3
bucket: *AWS_BUCKET
init:
outputPath: *S3_TRAINED_DATA_PATH
Run the installer as usual
./install-lilt.sh
Enable the feature for the organization
Log in to the front pod and enable the Cloud Upload feature. This is a one-time enablement per ORG_ID.
The default ORG_ID for a new installation is 1.
kubectl exec -it -n lilt $(kubectl get pod -n lilt -lapp=front -o name) -- \
npm run dist-admin-cli set-org-setting -o <ORG_ID> -s enableDirectToCloudUploads -v true
Option 2: Managed (EKS) IRSA
Create AWS IAM Policy and Role
In the AWS console or via Terraform, create an IAM role and policy for accessing the S3 bucket, then attach them to each other. The following assumes you are using an OIDC provider.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::<my-bucket-name>",
"arn:aws:s3:::<my-bucket-name>/*"
]
}
]
}
Attach a trust policy to the role so that the service account can assume it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"<OIDC_PROVIDER>:sub": "system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
}
}
}
]
}
Keep note of the role ARN for later use:
arn:aws:iam::<ACCOUNT_ID>:role/<role_name>
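As a CLI alternative to the console or Terraform, the policy and role can be created and attached as follows. A sketch, assuming the permissions policy above is saved as s3-policy.json and the trust policy as trust-policy.json (both file names are placeholders):

```shell
# Create the permissions policy and the role with its trust policy
aws iam create-policy \
  --policy-name <POLICY_NAME> \
  --policy-document file://s3-policy.json
aws iam create-role \
  --role-name <ROLE_NAME> \
  --assume-role-policy-document file://trust-policy.json

# Attach the policy to the role
aws iam attach-role-policy \
  --role-name <ROLE_NAME> \
  --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<POLICY_NAME>
```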
Create Service Account
Either via Terraform or kubectl, create a service account for use by the pods. Example using kubectl:
kubectl create serviceaccount <service-account-name> -n <your-namespace>
kubectl annotate serviceaccount <service-account-name> \
-n <your-namespace> \
eks.amazonaws.com/role-arn=<your-role-arn>
Update RBAC
Pods can reference only ONE service account. This overrides any default cluster and pod permissions, so any additional requirements must be included within the same RBAC roles. For example,
neural requires access to pods and deployments, dataflow requires access to Argo, etc.
Create a role YAML and apply it to the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: lilt-batch4-neural-role
namespace: <your-namespace>
rules:
- apiGroups: [""]
resources: ["pods", "pods/status", "pods/log", "configmaps", "secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
resources: ["deployments", "deployments/scale"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: lilt-argo-workflows-server-role
namespace: <your-namespace>
rules:
- apiGroups: [""]
resources: ["configmaps", "events"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["watch", "create", "patch"]
- apiGroups: ["argoproj.io"]
resources:
- "eventsources"
- "sensors"
- "workflows"
- "workfloweventbindings"
- "workflowtemplates"
- "cronworkflows"
verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: lilt-argo-workflows-workflow-role
namespace: <your-namespace>
rules:
- apiGroups: ["argoproj.io"]
resources: ["workflowtaskresults"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: lilt-argo-workflows-workflow-controller-role
namespace: <your-namespace>
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumeclaims/finalizers"]
verbs: ["create", "update", "delete", "get"]
- apiGroups: ["argoproj.io"]
resources: ["workflows", "workflows/finalizers", "workflowtasksets", "workflowtasksets/finalizers", "workflowartifactgctasks"]
verbs: ["get", "list", "watch", "update", "patch", "delete", "create"]
- apiGroups: ["argoproj.io"]
resources: ["workflowtemplates", "workflowtemplates/finalizers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["argoproj.io"]
resources: ["workflowtaskresults", "workflowtaskresults/finalizers"]
verbs: ["list", "watch", "deletecollection"]
- apiGroups: ["argoproj.io"]
resources: ["cronworkflows", "cronworkflows/finalizers"]
verbs: ["get", "list", "watch", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: [""]
resources: ["serviceaccounts"]
verbs: ["get", "list"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["create", "get", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create"]
- apiGroups: ["coordination.k8s.io"]
resourceNames: ["workflow-controller", "workflow-controller-lease"]
resources: ["leases"]
verbs: ["get", "watch", "update", "patch", "delete"]
- apiGroups: [""]
resourceNames: ["argo-workflows-agent-ca-certificates"]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: lilt-argo-workflows-namespace-access
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: lilt-argo-workflows-clusterworkflowtemplates-access
rules:
- apiGroups: ["argoproj.io"]
resources: ["clusterworkflowtemplates", "clusterworkflowtemplates/finalizers"]
verbs: ["get", "list", "watch"]
Then bind each role to the service account:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: bind-lilt-batch4-neural-role
namespace: <your-namespace>
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: Role
name: lilt-batch4-neural-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: bind-lilt-argo-workflows-server-role
namespace: <your-namespace>
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: Role
name: lilt-argo-workflows-server-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: bind-lilt-argo-workflows-workflow-role
namespace: <your-namespace>
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: Role
name: lilt-argo-workflows-workflow-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: bind-lilt-argo-workflows-workflow-controller-role
namespace: <your-namespace>
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: Role
name: lilt-argo-workflows-workflow-controller-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bind-lilt-argo-workflows-namespace-access
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: ClusterRole
name: lilt-argo-workflows-namespace-access
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: bind-lilt-argo-workflows-clusterworkflowtemplates-access
subjects:
- kind: ServiceAccount
name: <your-service-account>
namespace: <your-namespace>
roleRef:
kind: ClusterRole
name: lilt-argo-workflows-clusterworkflowtemplates-access
apiGroup: rbac.authorization.k8s.io
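Assuming the roles and bindings above are saved to files (the file names here are placeholders; the namespaces are already set in the manifests themselves), apply them with kubectl:

```shell
# Apply the namespaced roles/bindings and the cluster-scoped resources
kubectl apply -f lilt-rbac-roles.yaml -f lilt-rbac-bindings.yaml
```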
Update Values
First, collect the following information for your bucket:
AWS_BUCKET: <AWS_BUCKET>
AWS_REGION: <AWS_REGION>
AWS_S3_ENDPOINT_URL: <AWS_S3_ENDPOINT_URL>
AWS_S3_ENDPOINT_URL_SSL: <AWS_S3_ENDPOINT_URL_SSL>
STORAGETYPE: s3
For example:
AWS_S3_ENDPOINT_URL: http://s3.us-east-1.amazonaws.com
AWS_S3_ENDPOINT_URL_SSL: https://s3.us-east-1.amazonaws.com
Remember to change the region in the example URLs (e.g., us-east-1) if your S3 bucket is in a different AWS region.
Service Account:
<SERVICE_ACCOUNT>: <service-account-name>
Edit your override values at lilt/environments/lilt/values.yaml.
Add all of the following values, replacing the <> placeholders with the values documented above:
AWS_BUCKET: &AWS_BUCKET <AWS_BUCKET>
AWS_REGION: &AWS_REGION <AWS_REGION>
AWS_S3_ENDPOINT_URL: &AWS_S3_ENDPOINT_URL <AWS_S3_ENDPOINT_URL>
AWS_S3_ENDPOINT_URL_SSL: &AWS_S3_ENDPOINT_URL_SSL <AWS_S3_ENDPOINT_URL_SSL>
STORAGETYPE: &STORAGETYPE s3
SERVICE_ACCOUNT: &SERVICE_ACCOUNT <SERVICE_ACCOUNT>
# Compound strings, please replace <AWS_BUCKET> with the bucket name and keep the rest of the text
S3_WPA_PATH: &S3_WPA_PATH "s3://<AWS_BUCKET>/wpa/"
S3_TRAINED_DATA_PATH: &S3_TRAINED_DATA_PATH "s3://<AWS_BUCKET>/trained/"
front:
enabled: true
env:
MM_BUCKET: *AWS_BUCKET
UPLOADER_EVENT_SOURCE_TYPE: *STORAGETYPE
UPLOADER_STORAGETYPE: *STORAGETYPE
config:
front_configmap_values:
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
awsS3EndpointUrlPublic: *AWS_S3_ENDPOINT_URL
awsS3Bucket: *AWS_BUCKET
s3accesskey: ""
front_secret_values:
awsSecretAccessKey: ""
s3secretkey: ""
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
converter:
enabled: true
onpremValues:
config:
app:
bucketname: *AWS_BUCKET
args:
s3SslVerificationMode: NONE
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
qa:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
linguist:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
search:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
tm:
enabled: true
onpremValues:
app:
bucketname: *AWS_BUCKET
args:
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
tb:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
batch-tb:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
indexer:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
lexicon:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsRegion: *AWS_REGION
app:
bucketname: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
segment:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
file-translation:
enabled: true
onpremValues:
app:
bucketname: *AWS_BUCKET
args:
s3-accesskey: ""
s3-privatekey: ""
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
job:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
app:
bucketname: *AWS_BUCKET
args:
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
tag:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
auditlog:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
assignment:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
workflow:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
app:
bucketname: *AWS_BUCKET
args:
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
dataflow:
enabled: true
onpremValues:
env:
MINIO_ENDPOINT: *AWS_S3_ENDPOINT_URL
MINIO_ACCESS_KEY: ""
MINIO_SECRET_KEY: ""
MINIO_REGION: *AWS_REGION
MINIO_SSL_CERT_FILE: ""
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
file-job:
enabled: true
onpremValues:
config:
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsRegion: *AWS_REGION
awsSecretAccessKey: ""
awsAccessKeyId: ""
env:
BUCKETNAME: *AWS_BUCKET
app:
bucketname: *AWS_BUCKET
args:
storagetype: *STORAGETYPE
bucket: *AWS_BUCKET
s3-endpoint: *AWS_S3_ENDPOINT_URL
s3-region: *AWS_REGION
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
extraInitContainer:
BUCKETNAME: *AWS_BUCKET
memory:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
notification:
enabled: true
onpremValues:
env:
BUCKETNAME: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
translatev4:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
btcJobBucket: *AWS_BUCKET
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
# Init container to copy neural trained v4 models
init:
enabled: true
outputPath: *S3_TRAINED_DATA_PATH
updatev4:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
btcJobBucket: *AWS_BUCKET
bucket: *AWS_BUCKET
wpaDataStoreBasePath: *S3_WPA_PATH
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
langid:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
update-managerv4:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
routing:
enabled: true
# All messages from other teams' services go through the routing service.
# Increase the number of replicas in environments with high load.
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
batchv4:
enabled: true
onpremValues:
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
batch-worker-gpuv4:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
llm-inference:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
init:
outputPath: *S3_TRAINED_DATA_PATH
automqm:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
alignment:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
init:
outputPath: *S3_TRAINED_DATA_PATH
tag-projection:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT
init:
outputPath: *S3_TRAINED_DATA_PATH
nncache:
enabled: true
onpremValues:
config:
awsSecretAccessKey: ""
awsAccessKeyId: ""
awsRegion: *AWS_REGION
awsS3EndpointUrl: *AWS_S3_ENDPOINT_URL
awsS3EndpointUrlSsl: *AWS_S3_ENDPOINT_URL_SSL
caBundlePath: ""
storageType: *STORAGETYPE
bucket: *AWS_BUCKET
serviceAccount:
enabled: true
name: *SERVICE_ACCOUNT

