Lilt’s machine translation (MT) is both interactive and adaptive.
Lilt has two NMT models: a baseline model and an adaptive model. Each language pair has its own baseline model, which is trained in one-off fashion, typically once a year. Each individual translation project has its own adaptive model, which is initialized from a baseline model and trained in real time: it updates roughly once every 10 confirmed segments.
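As a rough mental model, the relationship between the two can be sketched as follows. This is an illustrative sketch only, not Lilt's actual implementation: the class names, the `UPDATE_INTERVAL` constant, and the update mechanics are assumptions, and a real system would fine-tune neural model weights rather than count updates.

```python
class BaselineModel:
    """Shared model for one language pair; retrained infrequently (e.g. yearly)."""

    def __init__(self, language_pair):
        self.language_pair = language_pair


class AdaptiveModel:
    """Per-project model initialized from a baseline and updated in real time."""

    UPDATE_INTERVAL = 10  # approximate number of confirmed segments per update

    def __init__(self, baseline):
        self.baseline = baseline
        self.pending = []  # confirmed segment pairs awaiting the next update
        self.updates = 0   # number of incremental updates applied so far

    def confirm(self, source, target):
        # Each confirmed segment is queued; once enough accumulate,
        # an incremental update runs.
        self.pending.append((source, target))
        if len(self.pending) >= self.UPDATE_INTERVAL:
            self._update()

    def _update(self):
        # A real system would fine-tune model weights on the pending
        # pairs; here we only record that an update happened.
        self.updates += 1
        self.pending.clear()


baseline = BaselineModel("en-de")
project_model = AdaptiveModel(baseline)
for i in range(25):
    project_model.confirm(f"source {i}", f"target {i}")
# 25 confirmations at an interval of 10 yield 2 incremental updates,
# with 5 segments still pending.
```

The point of the sketch is the division of labor: the baseline is shared and stable per language pair, while each project's adaptive model layers frequent, small updates on top of it.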
Interactivity refers to the integration of machine translation suggestions into the translation workflow. When translating with Lilt, the MT system suggests translations of the source segment and provides hotkeys to merge and edit those suggestions into the final, human translation. When the translator confirms the segment, the system treats the source/target pair as correct, and as an example worth learning from for future translation suggestions.
Adaptation refers to the ability of Lilt’s translation models to change in response to confirmed segments. The MT learns every time a user confirms a segment during translation. In essence, the translator teaches the MT system to translate in a specific way — that is, to adapt to the style, grammar, and word choice that the translator uses.
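The suggest-confirm-learn loop described above can be sketched in a few lines. This is a hypothetical illustration, not Lilt's code: the `suggest`/`confirm` functions are invented names, and "learning" is simulated with an exact-match memory, whereas real adaptation updates the neural model's weights so that it generalizes beyond exact matches.

```python
def make_suggester():
    memory = {}  # confirmed source -> target pairs

    def suggest(source):
        # Prefer a previously confirmed translation; otherwise fall
        # back to a (stubbed) baseline MT suggestion.
        return memory.get(source, f"<baseline MT of: {source}>")

    def confirm(source, target):
        # The translator's final version becomes training signal.
        memory[source] = target

    return suggest, confirm


suggest, confirm = make_suggester()
first = suggest("Hello world")        # baseline suggestion, no learning yet
confirm("Hello world", "Hallo Welt")  # translator confirms the final version
second = suggest("Hello world")       # now reflects the confirmed pair
```

Even in this toy form, the loop captures the core idea: every confirmation immediately changes what the system will suggest next.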
Adaptation speed differs across Memories; it is statistical, but deterministic. In some cases, adaptation picks up updated terminology in the very next segment; in others, it may take significantly longer. Segment context also matters for adaptation speed: although the next segment might adapt to a particular change, a segment further down in the document might fall back to the original MT suggestion, because its source sentence is contextually more distant from the first occurrence.