Lilt’s machine translation (MT) is both interactive and adaptive.
We run two NMT models. The baseline model is trained in a one-off pass, usually once a year, and each language pair has its own. The adaptive model trains in real time, each translation project has its own, and each is initialized from one baseline model. From the translator's and the customer's point of view, the training happens in real time: the adaptive model retrains roughly once every 10 confirmed segments.
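The batched retraining trigger can be sketched as follows. This is an illustrative model of the behavior described above, not Lilt's actual implementation; the class and method names are hypothetical.

```python
# Hypothetical sketch of online adaptation: confirmed segments accumulate
# in a buffer, and the adaptive model fine-tunes once the buffer reaches a
# threshold (~10 segments). Names and structure are illustrative only.

class AdaptiveModel:
    def __init__(self, baseline_name, batch_size=10):
        self.baseline_name = baseline_name   # baseline is shared per language pair
        self.batch_size = batch_size
        self.buffer = []                     # confirmed (source, target) pairs
        self.updates = 0                     # fine-tune passes applied so far

    def confirm(self, source, target):
        """Called each time a translator confirms a segment."""
        self.buffer.append((source, target))
        if len(self.buffer) >= self.batch_size:
            self._fine_tune(self.buffer)
            self.buffer = []

    def _fine_tune(self, pairs):
        # A real system would run gradient updates on the NMT model here;
        # this sketch only counts passes to show the trigger logic.
        self.updates += 1


model = AdaptiveModel("en-de-baseline")
for i in range(25):
    model.confirm(f"source {i}", f"target {i}")
# 25 confirmations -> 2 fine-tune passes, 5 segments still buffered
```

The key design point is that training cost is amortized: confirmations are cheap appends, and the heavier fine-tune step runs only once per batch.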
Interactivity refers to the integration of machine translation suggestions into the translation workflow. When translating with Lilt, the MT system intelligently suggests translations of the source segment and provides hotkeys to merge and edit those suggestions into the final, human translation. When the translator confirms the segment, the system treats the source/target pair as correct, and as an example worth learning from.
Adaptation refers to the ability of Lilt's translation models to change in response to confirmed segments. The MT system learns every time a user confirms a segment during translation. In essence, the translator teaches the MT system to translate in a specific way: to adapt to the style, grammar, and word choice that the translator uses.
Adaptation speed differs across memories, and it is statistical but deterministic. In some cases, adaptation picks up updated terminology in the very next segment; in other cases, it may take significantly longer. The context of the segment also matters for adaptation speed. Although the next segment might adapt to a particular correction, a segment further down in the document might fall back to the original MT suggestion, since its source sentence is further from the first occurrence.
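The context dependence above can be illustrated with a toy retrieval rule: an adapted translation applies strongly when a new source sentence closely resembles an already-confirmed segment, and the system falls back toward the baseline suggestion otherwise. This is a deliberately simplified stand-in, not Lilt's actual mechanism; `jaccard` and `suggest` are hypothetical helpers.

```python
# Toy illustration of context-dependent adaptation: a segment similar to a
# confirmed pair gets the adapted target, a dissimilar one falls back to
# the baseline MT suggestion. Not Lilt's actual algorithm.

def jaccard(a, b):
    """Word-overlap similarity between two source sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest(source, confirmed, baseline_suggestion, threshold=0.5):
    """Return the adapted target when a similar confirmed segment exists,
    otherwise the baseline MT suggestion."""
    best = max(confirmed, key=lambda pair: jaccard(source, pair[0]), default=None)
    if best and jaccard(source, best[0]) >= threshold:
        return best[1]           # adaptation kicks in
    return baseline_suggestion   # falls back to the original MT output

confirmed = [("the user clicks the button",
              "l'utilisateur clique sur le bouton")]
near = suggest("the user clicks the red button", confirmed, "baseline output")
far = suggest("quarterly revenue grew sharply", confirmed, "baseline output")
# near reuses the confirmed target; far falls back to "baseline output"
```

In a real NMT system the "similarity" lives in the model's learned representations rather than word overlap, but the qualitative behavior is the same: the further a source sentence is from what the model has adapted to, the more its output resembles the baseline.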