
3rd-party LLM Support for the Contextual AI Engine

In this article, we describe how to set up alternate Large Language Model (LLM) providers within Lilt. This article is meant for anyone interested in using and managing their own LLMs within the Lilt platform.

An Administrator role or equivalent API Access role permission is required to configure LLMs in Lilt.

Configuring and Editing LLM Integrations and Settings

Manage LLMs

Lilt enables users to set up and manage LLMs entirely within the Lilt platform. To set up and manage an LLM:

  1. Use the sidebar to navigate to “Connect”, then select “LLMs”.

  2. From the desired LLM card, select “Configure”.

  3. Enter the required configuration for your LLM (e.g., API key or password).

  4. Select from the set of options: “Enable for your organization”, “Use Terms”, and “Allow fine-tuning”. NOTE: Some options may be marked as “[not supported]” to indicate that the functionality is not available for the selected LLM.

    1. Enable for your organization = allows any Lilt user in your organization to view Models built from this LLM on the Models page and to use those Models when creating a job or Instant Translate.

    2. Use Terms = allows selected terminology entries from selected Lilt Data Sources to be used for fine-tuning Models built from this LLM.

    3. Allow fine-tuning = allows Lilt to share memory entries from selected Lilt Data Sources to create more contextualized Models.

  5. Click “Save”.
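For teams that script their setup, the options above can be collected into a single configuration payload. The function and field names below are illustrative assumptions for this sketch, not Lilt's documented API; they simply mirror the settings chosen in steps 3 and 4:

```python
# Hypothetical sketch: function and field names are assumptions, not Lilt's
# documented API. They mirror the options in the configuration steps above.

def build_llm_config(api_key: str,
                     enable_for_org: bool = False,
                     use_terms: bool = False,
                     allow_fine_tuning: bool = False) -> dict:
    """Assemble the settings from steps 3-4 into one configuration payload."""
    return {
        "credentials": {"api_key": api_key},        # API key or password (step 3)
        "enable_for_organization": enable_for_org,  # org-wide visibility of Models
        "use_terms": use_terms,                     # terminology entries for fine-tuning
        "allow_fine_tuning": allow_fine_tuning,     # share memory entries
    }

config = build_llm_config(api_key="sk-...", enable_for_org=True, use_terms=True)
```

Options not passed explicitly default to off, matching a freshly configured integration before any toggles are enabled.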

Edit LLM Credentials and Settings

Once an LLM is set up, you can edit the API key, password, and settings by doing the following:

  1. Use the sidebar to navigate to “Connect”, then select “LLMs”.

  2. From the desired LLM, select “Edit”.

  3. Edit the API key or password, toggle “Enable for your organization”, “Use Terms”, and “Allow fine-tuning” as needed, then click “Save”.

Managing Models within Lilt

Once LLMs are configured, you can create and update Models from these third-party LLM providers across supported language pairs in Lilt’s Model Builder.

Setting up a Model

To get started, navigate to the Contextual AI tab. Inside the Contextual AI > Models tab, you'll see a list of Models.

To create a new model:

  1. Use the sidebar to navigate to “Contextual AI”, then select “Models”.

  2. Once on the Models page, select “+ New Model”.

    1. Create a name for your Model; note that this name will carry over to the third-party system as well.

    2. Enter the source and target languages.

    3. Select a reference Data Source. Memory and Termbase entries from this Data Source will be sent to your 3rd-party LLM provider to fine-tune your Model.

    4. Select from the enabled LLM providers.

    5. Click “Create”.
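The five inputs from the “+ New Model” dialog can be sketched as a single request object. The function, field names, and provider identifiers below are illustrative assumptions, not Lilt's documented API:

```python
# Hypothetical sketch mirroring the Model Builder fields above; names and
# provider identifiers are illustrative assumptions, not Lilt's documented API.

def build_model_request(name: str, source: str, target: str,
                        data_source_id: int, provider: str) -> dict:
    """Collect the five inputs from the "+ New Model" dialog."""
    supported_providers = {"lilt", "amazon", "google", "deepl"}
    if provider not in supported_providers:
        raise ValueError(f"unknown provider: {provider}")
    return {
        "name": name,                      # also carries over to the third-party system
        "source_language": source,
        "target_language": target,
        "data_source_id": data_source_id,  # Memory + Termbase entries used for fine-tuning
        "provider": provider,              # must be an enabled LLM integration
    }

request = build_model_request("EN-DE Marketing", "en", "de", 42, "google")
```

Validating the provider up front mirrors the UI, which only lists LLM integrations that have been enabled for your organization.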

Retraining an existing Model

Lilt stores your linguistic assets in Data Sources and adds entries with each sentence translated during Verified Translation projects. If at any time you want to use that data to retrain and customize your 3rd-party Models, simply navigate to the Models page and select “Retrain”. This action opens a confirmation modal showing the LLM and the time since the last training. If you wish to update the Model with the new entries from your Data Source, confirm by clicking “Retrain”.

Please note that Lilt Contextual AI models are continuously trained in real time, so there is no need to retrain manually through this process!
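The retrain decision described above can be summarized in a few lines. This is a sketch under stated assumptions: the provider identifiers are illustrative, and the rules simply restate the article (Lilt Contextual AI trains continuously; DeepL offers no fine-tuning; other providers benefit from retraining once new Data Source entries exist):

```python
# Hypothetical sketch of the retrain decision described above; provider
# identifiers are illustrative assumptions, not Lilt identifiers.

CONTINUOUSLY_TRAINED = {"lilt"}   # Lilt Contextual AI retrains in real time
NO_FINE_TUNING = {"deepl"}        # DeepL Models stay on the baseline LLM

def should_retrain(provider: str, new_entries_since_training: int) -> bool:
    """True only for fine-tunable 3rd-party providers with new Data Source entries."""
    if provider in CONTINUOUSLY_TRAINED or provider in NO_FINE_TUNING:
        return False
    return new_entries_since_training > 0
```

In practice the confirmation modal gives you the same signal: if little time has passed since the last training and few entries have been added, retraining will change little.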

Available LLM Integrations Within Lilt

Lilt currently offers four LLM integrations:

  • Lilt Contextual AI

  • Amazon Translate

  • Google Translate

  • DeepL

Lilt, Amazon, and Google all offer fine-tuning of their LLMs with parallel data. As your Data Sources within Lilt grow, you can easily retrain your Models from these LLMs through Lilt’s Model Builder interface on the Models page.

Please note that DeepL does not currently offer fine-tuning, so all Models created with DeepL will use the baseline LLM.
