Overview
By default, LILT comes with MinIO enabled. However, if an environment has direct access to S3, using S3 is the recommended approach. This document covers configuring LILT to use S3 instead of MinIO; it does not cover how to transition an existing MinIO deployment to S3. Please work with your program manager to facilitate that process.

There are two options for authenticating with S3, depending on the type of cluster (managed or unmanaged):

- Unmanaged: Identity and Access Management (IAM) using an AWS access key and secret key
- Managed (EKS): IAM Roles for Service Accounts (IRSA) that attaches a service account to a pod
Uninstall MinIO
There is currently no mechanism in LILT to automatically remove MinIO from a deployment. This is a feature we are working to implement by the end of 2025. In the meantime, MinIO may be removed directly by:

Configure LILT
Update Secrets
LILT ships with a default AWS credential. This needs to be removed. Edit lilt/environments/lilt/secrets.yaml and remove the following line:
Option 1: Unmanaged IAM
Update Values
If you plan to use Cloud Upload (direct upload to S3), create an SQS queue
Ref: https://us-east-1.console.aws.amazon.com/sqs/v3/home?#/create-queue

You can keep all settings as default, but attach an access policy; you will need to modify it once the S3 bucket is created in the next step.
Sample Policy:
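For illustration only, a minimal policy of this shape might look like the following. All `<...>` values are placeholders; the bucket ARN is what you will fill in after creating the bucket in the next step:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ToSendMessages",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:<ACCOUNT_ID>:<QUEUE_NAME>",
      "Condition": {
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:::<BUCKET_NAME>" }
      }
    }
  ]
}
```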
Create an S3 bucket if you don't have one
Ref: https://us-east-1.console.aws.amazon.com/s3/bucket/create

Keep note of the bucket name for later use.

Attach the following CORS policy to the bucket:
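As a sketch of what such a CORS configuration could look like (the allowed origin is a placeholder for your LILT hostname; adjust methods and headers to your deployment's needs):

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["https://<LILT_HOSTNAME>"],
    "ExposeHeaders": ["ETag"]
  }
]
```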
Under the bucket's Properties tab, add an Event Notification targeting the SQS queue created above.
Update the SQS access policy (if needed) to allow the S3 bucket to write to the SQS queue
Ref: https://us-east-1.console.aws.amazon.com/sqs/v3/home?#/queues

Access your queue and click the Edit button under Access policy. Add the following block to the statements:

Create an IAM Policy
Ref: https://us-east-1.console.aws.amazon.com/iam/home?#/policies/create

Replace the fields <...> with the names you got in the previous steps. Keep note of the policy name for later use.
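A sketch of what this workload policy might contain, granting the bucket and queue access the services need. All `<...>` values are placeholders, and the exact action list should be confirmed against your deployment's requirements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    },
    {
      "Sid": "BucketListAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<BUCKET_NAME>"
    },
    {
      "Sid": "QueueAccess",
      "Effect": "Allow",
      "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
      "Resource": "arn:aws:sqs:us-east-1:<ACCOUNT_ID>:<QUEUE_NAME>"
    }
  ]
}
```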
Create an AWS user for the workload
Ref: https://us-east-1.console.aws.amazon.com/iam/home?#/users/create

Keep note of the username you generate for later use. Attach the policy generated in the previous step.
Collect the following information for your bucket:
Remember to change the region in the example URLs (e.g., us-east-1) if your S3 bucket is in a different AWS region.
Edit your override values at lilt/environments/lilt/values.yaml.
Add all the following values, replacing values within <> with the values documented from above:
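As an illustrative sketch only: the key names below are hypothetical and must be confirmed against your LILT values schema, but the YAML anchors (`&AWS_BUCKET`, `&STORAGETYPE`) correspond to the `*AWS_BUCKET` and `*STORAGETYPE` aliases referenced later in this document:

```yaml
# Hypothetical sketch - confirm key names against your LILT values schema.
aws:
  s3Bucket: &AWS_BUCKET <BUCKET_NAME>      # consumed elsewhere as *AWS_BUCKET
  region: <REGION>
  accessKeyId: <ACCESS_KEY_ID>
  secretAccessKey: <SECRET_ACCESS_KEY>
storageType: &STORAGETYPE s3               # consumed elsewhere as *STORAGETYPE
```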
Run the installer as usual
Enable the feature for the organization
Log in to the front pod and enable the Cloud Upload feature. This is a one-time enable per ORG_ID. The default ORG_ID for a new installation is 1.
Option 2: Managed (EKS) IRSA
Create AWS IAM Policy and Role
In the AWS console or via Terraform, create an IAM role and policy for accessing the S3 bucket, then attach them to each other. The following assumes that you are using an OIDC provider. The role ARN will have the form:

arn:aws:iam::<ACCOUNT_ID>:role/<role_name>
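Under IRSA, the role's trust policy typically looks like the following sketch. All `<...>` values are placeholders (`<OIDC_PROVIDER>` being your cluster's OIDC provider identifier); confirm against your cluster's configuration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_PROVIDER>:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT_NAME>"
        }
      }
    }
  ]
}
```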
Create Service Account
Either via Terraform or kubectl, create a service account for use by the pods.

Example using kubectl:
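A minimal sketch of such a manifest, which could be applied with `kubectl apply -f`. The names are placeholders; the `eks.amazonaws.com/role-arn` annotation is the standard IRSA mechanism for binding the service account to the IAM role created above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: <SERVICE_ACCOUNT_NAME>
  namespace: <NAMESPACE>
  annotations:
    # Binds this service account to the IAM role created above (IRSA)
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<role_name>
```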
Update RBAC
Pods can only reference ONE service account. This overrides any default cluster and pod permissions, so any additional requirements must be included within the same RBAC roles. For example, neural requires access to pods and deployments, dataflow requires access to Argo, etc.
Create a role yaml and apply it to the cluster:
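As a sketch only: a Role granting the pod/deployment access mentioned above, bound to the service account. The role name here is hypothetical, and the actual rules must cover everything your services need (Argo resources for dataflow, etc.):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: lilt-workload          # hypothetical name
  namespace: <NAMESPACE>
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: lilt-workload
  namespace: <NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: lilt-workload
subjects:
  - kind: ServiceAccount
    name: <SERVICE_ACCOUNT_NAME>
    namespace: <NAMESPACE>
```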
Update Values
First, collect the following information for your bucket:
Remember to change the region in the example URLs (e.g., us-east-1) if your S3 bucket is in a different AWS region.
Service Account:
Edit your override values at lilt/environments/lilt/values.yaml.
Add all of the following values, replacing values within <> with the values documented from above:
Cloud Upload setup (front service)
The Cloud Upload feature is owned by the front service and builds on top of the S3 bucket/credentials already defined above. The values below summarize the environment variables that must be provided so front can sign presigned URLs and advertise the S3 storage backend:
These values reuse the YAML anchors (*AWS_BUCKET, *STORAGETYPE, etc.) that other services already consume, so all clients target the same bucket/region. MM_BUCKET is the bucket where uploads land, and the UPLOADER_* variables tell front which storage type and endpoint to advertise in the signed URLs.
- MM_BUCKET – the S3 bucket where uploads land; it matches the awsS3Bucket/bucket values used across the values.yaml.
- UPLOADER_EVENT_SOURCE_TYPE and UPLOADER_STORAGETYPE – both should mirror the STORAGETYPE anchor (typically s3) so the service advertises the correct storage backend in the presigned URLs.
- The front pod's IAM role / IRSA service account must have the s3:GetObject, s3:PutObject, s3:ListBucket, and s3:DeleteObject permissions on the bucket because it signs upload URLs.
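As a sketch of how these variables might be wired up in values.yaml (the `front.env` structure here is hypothetical; the aliases refer to the anchors described above):

```yaml
front:
  env:
    MM_BUCKET: *AWS_BUCKET                  # uploads land in the shared S3 bucket
    UPLOADER_EVENT_SOURCE_TYPE: *STORAGETYPE
    UPLOADER_STORAGETYPE: *STORAGETYPE      # typically "s3"
```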
Finally, enable the Cloud Upload feature for the organization using the dist-admin-cli command provided earlier:

