Translation automation, machine translation, and AI-enhanced localization are just a few of the concepts and technologies that have been trending in recent years. The need for machine-translated content is growing, and technological advancements seem to be matching its pace.
However, with the development of translation technology, a question arises: how much trust should you put in a given machine translation engine? How do you know what quality the output will be, or how much post-editing it will need?
Why Evaluate Machine Translation Quality?
When using machine translation, there are many choices to make: which documents are suitable for machine translation, how much you want to rely on the translation output, and what quality you want to achieve by applying MT to your projects and documents.
And, of course, you must select the machine translation engine that fits your project perfectly, taking into consideration the language pair, the type of document you want to translate, and even the target industry for your content. With so many factors to consider and so many technologies to choose from, you must evaluate the desired machine translation engines to find out which one is (or which ones are) the best fit for your individual needs.
How to Evaluate Machine Translation?
There are two different approaches when it comes to evaluating machine translation engines.
Since the goal of any translation is to create text in the target language that is understood by humans, human evaluation is naturally one approach when it comes to assessing the quality of MT translation output.
However, human evaluation comes with its own challenges (such as differences between the scores given by different evaluators, or even by the same evaluator assessing the same output several times), and it takes considerable time to perform. In these cases, automatic evaluation, performed by software, is the solution.
memoQ's MT evaluation solution: AIQE
Since memoQ can be integrated with several machine translation engines, we want to ensure that memoQ users can use them to their full potential. That is why we rolled out AIQE (Artificial Intelligence-based Quality Estimate) with memoQ 10.1 via two separate integrations: TAUS and ModelFront. AIQE was introduced to enable language service providers and enterprises to mitigate the risks of unreliable machine translation quality and to increase efficiency through new automation opportunities when using MT.
Following the initial version earlier this year, memoQ 10.4 brings several new functionalities that further help companies evaluate machine translation engines and produce close estimates of the quality of the translation output these engines provide.
This helps memoQ users select the machine translation technology tailored to their specific needs, making it easier to estimate the time and effort required to post-edit the MT output. It can also reduce turnaround times, enhance translation quality, and save the costs and time associated with post-editing.
What’s New in AIQE?
memoQ 10.4 has arrived, and with it, some new functionalities are also here to help you further optimize your workflows and automate processes when using machine translation.
Auto selection of MT engines in pre-translate with AIQE
Until now, the matches during the pre-translation of a document could come from translation memories, LiveDocs, or a selected machine translation engine.
But how do you select the right MT engine for a given document, project, or client? From now on, AIQE can choose the best translation from different engines during pre-translation: run a pre-translation with multiple engines, and let AIQE pick the best translation segment by segment.
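To make the mechanics concrete, here is a minimal sketch of the idea (this is illustrative pseudologic, not memoQ's actual API): each engine produces a candidate translation for a segment, a quality-estimation function scores every candidate, and the highest-scoring translation wins for that segment.

```python
# Illustrative sketch of per-segment engine selection by quality
# estimate. All function names here are hypothetical, not memoQ's API.

def pick_best_candidates(segments, engines, translate, estimate_quality):
    """For each source segment, collect one candidate translation per
    engine, score each candidate with a quality-estimation function,
    and keep the highest-scoring translation."""
    results = []
    for source in segments:
        # One candidate per engine for this segment.
        candidates = [(engine, translate(engine, source)) for engine in engines]
        # Keep the candidate with the highest quality estimate.
        best_engine, best_target = max(
            candidates,
            key=lambda pair: estimate_quality(source, pair[1]),
        )
        results.append(
            {"source": source, "target": best_target, "engine": best_engine}
        )
    return results
```

The key design point is that selection happens per segment, not per document, so a project can end up mixing output from several engines, each used where it scored best.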
Pre-translate auto-confirm action with MT & AIQE
This is one of the most important features of AIQE. Thanks to the auto-confirm function, you can simplify and automate your translation workflow and skip unnecessary steps when using machine translation in memoQ. This is similar to how translations are handled through TMs, since AIQE provides quality attributes comparable to a TM match rate.
Integrating AIQE into the auto-confirm step of pre-translation enables fully automated translation with both MT and TM matches, with quality assured along the way.
Display & record AIQE as "match rate" in .XLIFF and on the translation grid
When working with AIQE within memoQ, match rates for machine translation were previously displayed only in the translation results.
With the AIQE improvements in memoQ 10.4, match rates are also displayed on the translation grid itself. The match rate reflects the accuracy of the translation according to AIQE, and it is shown in parentheses, indicating that the match comes from a machine translation engine.
The match rate will also appear in the .XLIFF file associated with the project. This enables the user to see which MT engines the matches come from, and which one performed better in the given document. This feature is especially useful when evaluating machine translation engines.
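As a rough illustration of what recording a match rate and engine origin in a bilingual file can look like, here is a generic XLIFF 1.2-style fragment. The `match-quality` and `origin` attributes are standard XLIFF 1.2; memoQ's actual file structure and attribute names may differ, so treat this purely as an assumed example.

```xml
<!-- Generic XLIFF 1.2-style illustration; memoQ's actual bilingual
     file format is not reproduced here. -->
<trans-unit id="42">
  <source>Press the green button.</source>
  <target state="translated">Drücken Sie die grüne Taste.</target>
  <alt-trans match-quality="87%" origin="engine-A">
    <target>Drücken Sie die grüne Taste.</target>
  </alt-trans>
</trans-unit>
```

Recording the score and origin per segment is what makes it possible to compare, after the fact, which engine performed better on a given document.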
Machine Translation Evaluation: Final Thoughts
When working with machine translation, it is crucial that you assess and evaluate an engine before applying it to your project or document. This step ensures that you have an idea (the more accurate, the better) of the quality of the translation output. This way you can allocate your resources accordingly, and the time needed to deliver a specific project can be estimated accurately.
With the new functionality of AIQE introduced in memoQ 10.4, the evaluation process has just become easier. Download memoQ 10.4 and start working with AIQE to automate and enhance your workflows with machine translation!
Linguist turned content marketer, telling the story of memoQ.