This article discusses how Lilt’s Machine Translation (MT) models are both interactive and adaptive. Lilt has two types of MT models: a baseline model, and an adaptive model.
The baseline MT model is shared across all customers. Each language pair has its own baseline model. Lilt typically improves baseline models once a year using general-purpose data. Baseline models do not adapt based on data from individual Projects.
When a new Project is created in Lilt, a unique adaptive MT model is also created and used exclusively for that Project. Adaptive models are trained in real time every time a segment is confirmed in the Project.
Interactivity refers to the process of integrating MT suggestions into the translation workflow.
When translating in Lilt, the MT system suggests translations of the source segment that can be used during human translation.
Adaptation refers to the ability of Lilt’s translation models to change in response to confirmed segments.
When a translator confirms a segment, Lilt treats the source/target pair as a correct example and learns from it when generating future translation suggestions. In essence, the translator teaches the MT system to adapt its suggestions to the style, grammar, and word choice of the translator.
Adaptation speed differs across Lilt Memories. In some cases, adaptation will pick up on updated terminology in the very next segment. In other cases, it may take significantly longer. The context of a segment also affects adaptation speed. For example, the translation of a particular segment may inform the MT suggestions for nearby segments, while the MT may fall back to its original suggestions for segments farther away in the document.
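The behavior described above, confirmed segments overriding baseline suggestions for nearby segments while distant segments fall back to the baseline, can be illustrated with a toy sketch. This is not Lilt's actual implementation; the class name, method names, exact-match lookup, and fixed context window are all simplifying assumptions made for illustration.

```python
class AdaptiveSuggester:
    """Toy model of adaptive MT: confirmed source/target pairs override
    baseline suggestions, but only for segments nearby in the document."""

    def __init__(self, baseline, context_window=5):
        # baseline: dict of source segment -> baseline MT suggestion
        # context_window: how close (in segment positions) a confirmation
        # must be for the adapted translation to apply (illustrative only)
        self.baseline = baseline
        self.context_window = context_window
        self.confirmed = {}  # source segment -> (confirmed target, position)

    def confirm(self, source, target, position):
        # A translator confirms a segment; store it as a learning example.
        self.confirmed[source] = (target, position)

    def suggest(self, source, position):
        # Prefer a nearby confirmed translation; otherwise fall back
        # to the baseline suggestion.
        if source in self.confirmed:
            target, confirmed_at = self.confirmed[source]
            if abs(position - confirmed_at) <= self.context_window:
                return target  # adapted suggestion
        return self.baseline.get(source, "")  # baseline fallback


mt = AdaptiveSuggester({"Hallo Welt": "Hello world"}, context_window=2)
print(mt.suggest("Hallo Welt", 0))   # baseline suggestion
mt.confirm("Hallo Welt", "Hi, world", 0)
print(mt.suggest("Hallo Welt", 1))   # adapted: confirmation is nearby
print(mt.suggest("Hallo Welt", 10))  # baseline again: confirmation is far away
```

Real adaptive MT generalizes far beyond exact matches (to style, grammar, and word choice), but the sketch captures the core feedback loop: each confirmation becomes training signal for future suggestions, with stronger influence on nearby segments.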