How AI QA works

  • AI QA is an in-workflow step for linguists and reviewers that integrates with LILT’s existing UI for segment-level QA, batch QA, and subtitling QA checks.
  • AI QA runs automatically when segments are confirmed in the Translate stage or accepted in the Review stage.
  • If any QA errors are found, linguists can accept or ignore the suggested corrections per segment.
  • AI QA is available for all language pairs and takes less than 5 seconds to run on an individual segment.

AI QA Categories & Sub-categories

Accuracy & Completeness - Checks whether meaning, scope, and content are faithfully transferred.
  • Mistranslation: Meaning deviates from source
  • Addition: Extra information added
  • Omission: Missing information
  • Untranslated: Source text left untranslated
  • Scope or Plurality shift: Change in meaning breadth (e.g. singular <> plural)
  • Length deviation: Target too short or too long compared to source

Linguistic Conventions & Fluency - Ensures grammatical correctness and language quality.
  • Grammar: Incorrect syntax or structure
  • Number/Plural alignment: Mismatch between singular/plural or subject-verb
  • Spelling: Obvious typos or spelling mistakes
  • Punctuation: Errors affecting clarity or meaning
  • Repeated words: Accidental duplication
  • Wrong language: Text not in target language
  • Locale conventions: Incorrect local formats (dates, currencies, numbers)

Style & Readability - Assesses tone, register, and fluency of expression.
  • Awkward/Unidiomatic: Unnatural or clumsy phrasing
  • Tone/Register mismatch: Inappropriate style or tone for the domain or context

Technical & Formatting - Validates structural integrity and formatting consistency.
  • Tag integrity: Broken or mismatched tags
  • Numeric mismatch: Inconsistent numbers or formats
  • Alphanumeric mismatch: Incorrect codes, IDs, or mixed data
  • Symbol mismatch: Unbalanced punctuation (quotes, brackets, etc.)
  • Case mismatch: Incorrect capitalization
  • Whitespace: Extra or missing spaces affecting structure or readability

Terminology & Consistency - Ensures consistent and correct use of terms.
  • Term plausibility error: Incorrect or implausible term translation
  • Brand name handling: Improper treatment of brand names
  • DNT violation: Translated “Do Not Translate” items (e.g., “Wi-Fi”, URLs)
  • Forbidden Terms: Use of explicitly banned or deprecated terms
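
To make the Technical & Formatting sub-categories concrete, the sketch below shows the kind of source/target discrepancy a Numeric mismatch or Symbol mismatch check flags. It is an illustrative sketch only, not LILT’s implementation; the function names and regular expressions are assumptions made for the example.

```python
import re

# Matches integers and simple decimals with "." or "," separators (assumption for the example).
NUMBER = re.compile(r"\d+(?:[.,]\d+)?")

def has_numeric_mismatch(source: str, target: str) -> bool:
    """Flag segments whose numbers differ between source and target.

    Illustrative only: shows the kind of discrepancy a "Numeric mismatch"
    check reports, not how LILT's AI QA actually detects it.
    """
    return sorted(NUMBER.findall(source)) != sorted(NUMBER.findall(target))

def has_symbol_mismatch(target: str) -> bool:
    """Flag targets with unbalanced brackets or an odd number of straight quotes."""
    pairs = {"(": ")", "[": "]", "{": "}"}
    stack = []
    for ch in target:
        if ch in pairs:
            stack.append(pairs[ch])
        elif ch in pairs.values() and (not stack or stack.pop() != ch):
            return True
    return bool(stack) or target.count('"') % 2 == 1

# The second pair would be flagged: "3" in the source became "30" in the target.
print(has_numeric_mismatch("Order 3 items by May 5.", "Commandez 3 articles avant le 5 mai."))   # False
print(has_numeric_mismatch("Order 3 items by May 5.", "Commandez 30 articles avant le 5 mai."))  # True
print(has_symbol_mismatch("Voir la section (2.1 pour plus de détails."))                         # True
```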

AI QA Interface - Segment QA

  1. In all workflow stages, a “QA” icon is now visible next to each segment. Once the segment is confirmed or accepted, AI QA runs on it.
  2. Once AI QA finishes running, the QA icon indicates whether any QA issues were found: a green check mark displays if there are none, and a red number displays if there are one or more.
  3. The CAT sidebar has a new QA tab at the bottom that details the QA issues found, with an option to accept or ignore the suggested changes. You can rerun AI QA using the double arrow icon.

AI QA Interface - Batch QA

  1. Batch QA contains a new full-page view that groups errors by AI QA error category. A red number displays how many errors of the same type were found.
  2. You can then Accept or Ignore the suggestions displayed underneath each segment, or Accept or Ignore all suggestions within a category. Alternatively, you can edit the translation yourself if you don’t like the suggested change, including moving or adjusting tags.
    • Note: There is only one suggested correction per segment. If a segment has multiple errors, accepting or ignoring the suggestion applies to all of them.
  3. Clicking on “Back to segments view” will take you back to the standard CAT editor view.

AI QA after AI Review

With the release of AI QA, LILT is changing where AI Review displays in the CAT editor. All functionality will still be available; only its location in CAT is changing. This will be released as part of a phased rollout.
  • AI Review results will now appear in the CAT sidebar. You will still have the ability to reject the AI Review correction.
  • Once a segment is confirmed or accepted, AI QA runs. The results of AI QA will replace the results of AI Review and offer the ability to accept or ignore the QA suggestions.

Custom AI QA checks

Admins and managers can define additional AI QA checks, written in natural language, within LILT org settings. When QA runs during a workflow, AI QA also runs the custom AI QA checks and reports any relevant errors. Please note that this is an experimental feature in LILT and may not catch every error. If a check misses an error, try rewriting the prompt to be more specific, or include an example.
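
For illustration, a custom check might be phrased like the examples below. These prompts are hypothetical and only show the level of specificity that tends to work well; they are not required wording or built-in checks.
  • “Flag any target segment that writes dates in MM/DD/YYYY format. Dates should use DD/MM/YYYY, e.g. ‘04/03/2025’ instead of ‘03/04/2025’.”
  • “Flag any target segment that addresses the reader informally. This account requires formal address throughout.”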

Exact matches

There is a setting that controls whether AI QA runs on exact (100/101%) matches. If you would like AI QA to run on exact matches, an Admin or Manager will need to change the setting within LILT org settings.
  • If exact matches are locked for the org, the Unlock Locked Segments setting must be turned on in order to accept any proposed suggestions on those segments.

FAQs

  1. Is AI QA replacing human review? No. AI QA reduces manual effort and catches issues early, but human judgment remains the final gate for high-stakes content.
  2. Does AI QA enforce terminology? The translation model may adapt terms to context. AI QA provides a stricter post-editing layer that detects and corrects terminology mismatches for compliance.
  3. Can I use AI QA with third‑party models? Yes. LILT supports model choice and evaluation across providers, and AI QA can operate in workflows that include third‑party engines.
  4. My custom AI QA check didn’t detect an error. Custom AI QA checks are an experimental feature. If your check didn’t find a specific error, try rewriting the prompt to be more specific, or include an example.
  5. Does AI QA work with settings for Automatic Batch QA and Required Batch QA? Yes, if Automatic Batch QA or Required Batch QA are enabled for the org, then those will trigger as usual.