Researchers at Amazon Web Services' (AWS) AI lab recently found that a large amount of content on the Internet is generated by machine translation, and that the quality of these translations across multiple languages is generally poor. The team emphasized the importance of data quality and provenance when training large language models, a finding that underscores the need for careful data and source selection when building high-quality models.
The study also found that much of this content is translated into many languages at once, typically by machine translation, and that such machine-generated translations make up a large portion of web content in lower-resource languages.
To better understand the characteristics of machine-translated content, the research team built a massive resource called MWccMatrix. It contains 6.4 billion unique sentences covering 90 languages and provides groups of sentences that are translations of one another, known as translation tuples.
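To make the idea of a translation tuple concrete, here is a minimal Python sketch. The class, field names, and the parallelism threshold are hypothetical illustrations, not MWccMatrix's actual schema; the underlying intuition, that sentences parallel across many languages are more likely to be mass-produced machine translations, follows the study's finding.

```python
from dataclasses import dataclass, field


@dataclass
class TranslationTuple:
    """A group of sentences that are translations of one another.

    Hypothetical structure for illustration only.
    """
    # Maps a language code (e.g. "en", "de") to the sentence text.
    sentences: dict[str, str] = field(default_factory=dict)

    def parallelism(self) -> int:
        """Number of languages this sentence appears in."""
        return len(self.sentences)


def likely_machine_translated(t: TranslationTuple, threshold: int = 3) -> bool:
    """Heuristic: highly multi-way parallel sentences are more likely
    to have been produced by machine translation (assumed threshold)."""
    return t.parallelism() >= threshold


# Toy example.
t = TranslationTuple(sentences={
    "en": "Click here to subscribe to our newsletter.",
    "de": "Klicken Sie hier, um unseren Newsletter zu abonnieren.",
    "fr": "Cliquez ici pour vous abonner à notre newsletter.",
    "es": "Haga clic aquí para suscribirse a nuestro boletín.",
})
print(t.parallelism())               # 4
print(likely_machine_translated(t))  # True
```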
The researchers also noted a selection bias in the content that gets translated into many languages: it is often produced for purposes such as generating advertising revenue.
The researchers drew the following conclusion: "Machine translation technology has made significant progress over the past decade, but it still falls short of human quality. Over many years, machine-translated content has been added to the web using whatever MT systems were available at the time, so much of the machine-translated content on the web is likely of low quality by modern standards. This could lead to more 'hallucinations' in LLMs, and the selection bias suggests the data may be of lower quality even before machine translation errors are considered. Data quality is crucial for LLM training, and high-quality corpora, such as books and Wikipedia articles, are typically upsampled multiple times."
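The upsampling mentioned in the conclusion simply means repeating high-quality sources more often in the training mix than a single pass over noisy web data. Below is a minimal sketch of that idea; the corpus names and multipliers are assumed for illustration, since the article does not specify the ratios used for any particular LLM.

```python
import random

# Hypothetical upsampling factors: how many times each corpus is
# repeated relative to a single pass over web crawl data.
UPSAMPLE_FACTORS = {
    "web_crawl": 1,   # large but noisy; includes machine-translated pages
    "wikipedia": 3,   # smaller, higher quality, repeated more often
    "books": 2,
}


def build_training_stream(corpora: dict[str, list[str]],
                          factors: dict[str, int],
                          seed: int = 0) -> list[str]:
    """Return a shuffled stream of documents in which each corpus is
    repeated according to its upsampling factor."""
    stream: list[str] = []
    for name, docs in corpora.items():
        stream.extend(docs * factors.get(name, 1))
    random.Random(seed).shuffle(stream)
    return stream


# Toy example.
corpora = {
    "web_crawl": ["web doc 1", "web doc 2"],
    "wikipedia": ["wiki article"],
    "books": ["book chapter"],
}
stream = build_training_stream(corpora, UPSAMPLE_FACTORS)
print(len(stream))  # 2*1 + 1*3 + 1*2 = 7 documents
```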