AI Review
LILT’s AI Review Agent is an independent AI entity that acts as a dedicated editor, providing a crucial second layer of scrutiny. It proactively corrects errors in AI and human translations, ensuring near-perfect accuracy before final human review. This improves the quality of AI-only translations, reduces the impact of human error, and delivers consistently superior quality.
AI Review mechanism
Segments are automatically revised based on errors identified through LILT’s proprietary AI Review scoring mechanism. These revisions are then shown to linguists when one of the AI Review workflows has been selected: Instant > AI Review > Review or Instant > AI Review > Review > Secondary Review. In these workflows:
- AI Review takes the style guide into account when identifying and correcting errors. Because it applies style guides more consistently than human reviewers can, it serves as an extra set of review eyes and improves content quality.
- AI Review only applies to segments with Critical, Major, or Minor errors; segments with a “No error” severity status are left unchanged (see the sketch after this list).
- Exact matches are excluded. The “Accept and lock exact matches” and “Auto-confirm exact matches” settings determine whether they appear locked in the Review stage.
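The eligibility rules above can be summarized in a short sketch. This is purely illustrative and not LILT’s implementation; the Segment fields and the eligible_for_ai_review function are hypothetical names used only to make the filtering logic concrete.

```python
# Illustrative sketch only; not LILT's implementation.
# Segment, severity, and is_exact_match are hypothetical names.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    severity: str         # "Critical", "Major", "Minor", or "No error"
    is_exact_match: bool   # exact translation-memory matches skip AI Review

REVISABLE_SEVERITIES = {"Critical", "Major", "Minor"}

def eligible_for_ai_review(segment: Segment) -> bool:
    """Return True if AI Review would propose a correction for this segment."""
    if segment.is_exact_match:
        return False  # exact matches are excluded; locking depends on project settings
    return segment.severity in REVISABLE_SEVERITIES
```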
How to apply AI Review
AI Review is a pre-defined workflow customers can select, with two available options: Instant > AI Review > Review and Instant > AI Review > Review > Secondary Review. Either option can be set as the default workflow at the organization level or at the domain level.
AI Review analysis triggers as soon as the Instant Translate step is complete. The projects page shows a new workflow state indicating that AI Review is running, and users cannot translate or review segments during this time. Any user with access to the projects page can see this workflow state.
These workflows can also be optimized with AI, in which case segments with no errors arrive at the Review stage already confirmed and accepted. Because these segments are not locked, linguists can still change them as needed.

Workflow and optimize with AI selectors during the job creation process.
Model learning with an AI Review workflow
In these workflows, all segments are reviewed and accepted by a reviewer and are therefore saved into the selected model, continuing LILT’s fine-tuning of customers’ custom models.
Supported languages
AI Review is available in the following language pairs. LILT will continue to expand this set of languages.
| Source language | Target language |
| --- | --- |
| English | German, French, Spanish, Japanese, Korean, Portuguese, Chinese (Simplified), Chinese (Traditional), Polish, Vietnamese, Turkish, Norwegian, Finnish, Swahili, Russian, Swedish, Italian, Ukrainian, Indonesian, Dutch, Danish, Hindi, Thai, Malay, Arabic, Romanian, Czech, Hungarian |
| German | English |
| French | English |
| Spanish | English |
Linguist interface
In the new Instant > AI Review > Review workflows, linguists can see which segments had errors that were corrected by AI Review, and they can optionally reject the changes made.
Viewing which segments have identified errors
Errors identified by AI Review appear in a new segment flag called “AI”. The color of the flag reflects the severity of the identified and corrected error. Corrections are displayed the same way changes are shown in revision reports: removed text is struck through in red, and added text appears in green.

AI Review linguist interface
Rejecting changes made by AI Review
Linguists can reject the changes made by AI Review. Rejecting the AI correction reverts the target text to what it was before the correction was applied.
If the AI correction is rejected, the user can select a category for the rejection and add an optional comment.
Once the user adds a reason for the rejection, it displays in the popover along with the option to edit the category and comment.

Rejecting an AI Review correction
Revision reports
Revision reports show the results of AI Review alongside any reviewer corrections. If AI Review corrects the Instant translation, “LILT AI Reviewer” leaves a comment describing what was changed, along with the same category and severity shown to reviewers in the Review stage.

If the reviewer makes any changes beyond what was automatically changed by AI Review, these display in revision reports as usual, with the MQM error category, severity, and comment.
Users in an Admin or Manager role can see the AI Review error rate % in the downloaded Revision Report. The AI Review error rate % is calculated over the 80% of segments that were stack-ranked through AI Review.
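As a rough worked example of that denominator, the sketch below assumes the error rate is the share of segments with identified errors among the stack-ranked subset; that formula is an assumption for illustration only, since the documentation above confirms only the 80% stack-ranked denominator.

```python
# Hypothetical illustration of the AI Review error rate % calculation.
# Only the 80% stack-ranked denominator comes from the documentation;
# treating the rate as "error segments / stack-ranked segments" is an assumption.
total_segments = 1_000
stack_ranked = int(total_segments * 0.80)   # 800 segments pass through AI Review
segments_with_errors = 120                  # example count of flagged segments

error_rate_pct = 100 * segments_with_errors / stack_ranked
print(f"AI Review error rate: {error_rate_pct:.1f}%")  # -> 15.0%
```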