There has been a lot of talk about machine translation having been perfected. It has advanced considerably since we looked at it for corporate tasks in the 1980s, and I now see examples of it being touted for real-time, unchecked interactions. This ACM article describes a substantial test in which the results range from fairly good to useless. Like all AI, translation quality depends strongly on context. The risk attached to the outcome should drive where and how it is used, and quality control is essential.
Google Translate Does Not Understand the Content of the Texts
By Herbert Bruderer
" .... The examples show that the quality of machine translations vary between fairly good and useless. It depends, among other things, on the language pair, the subject area and the available data set and its quality. In Wikipedia, there are also large differences between the different language versions.
Further tests have shown that automatic translations are often inconsistent and sometimes even nonsensical. Sometimes words are simply missing, and the system's choices are at times difficult to understand. For some applications the results are "good enough." For non-native translators, the programs can be a valuable aid. For language pairs with very large data sets, machine translation may achieve the quality of a mediocre human translator. The raw translations usually have to be edited manually, and in the case of poor raw translations, the effort required to improve them can exceed that of a manual retranslation .... "
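To make the "quality control essential" point concrete, here is a minimal sketch of a review gate for raw machine translations. The `translate` function is a placeholder, not any particular service's API, and the length-ratio and back-translation thresholds are purely illustrative assumptions, not recommendations from the article.

```python
# Minimal sketch of a quality-control gate for raw machine translations.
# `translate` is a stub standing in for whatever MT service is actually used;
# the thresholds below are illustrative only.
from difflib import SequenceMatcher


def translate(text: str, source: str, target: str) -> str:
    """Placeholder: call the real MT service here."""
    return text  # stub so the sketch runs end to end


def needs_human_review(source_text: str, source_lang: str, target_lang: str) -> bool:
    """Flag a raw translation for manual post-editing."""
    forward = translate(source_text, source_lang, target_lang)
    back = translate(forward, target_lang, source_lang)

    # Crude completeness check: a large drop in length suggests missing words.
    length_ratio = len(forward.split()) / max(len(source_text.split()), 1)

    # Crude consistency check: compare the source against its back-translation.
    similarity = SequenceMatcher(None, source_text.lower(), back.lower()).ratio()

    return length_ratio < 0.5 or similarity < 0.6


if __name__ == "__main__":
    sample = "The quality of machine translations varies between fairly good and useless."
    print("Needs review:", needs_human_review(sample, "en", "de"))
```

A gate like this only catches gross failures (dropped content, wildly inconsistent round trips); anything it passes still deserves the manual post-editing the article describes.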
Saturday, May 04, 2019