Google's ever-so-clever Google Translate service may be falling foul of a problem known to grizzled engineers across the globe: garbage in, garbage out.
The problem was discussed by Google's director of research, Peter Norvig, at the Nasa Innovative Advanced Concepts conference at Stanford University in California on Wednesday, in response to a question from an audience member.
Norvig admitted that Google was "aware" of a problem caused by some sites using Google's services to translate their body copy into another language to create a localized version of their site.
The problem with this cut-rate method (bare cupboards of out-of-work translators aside) is that if Google indexes such a site, it may then feed the machine-generated translation back into the corpus it uses to train its own machine-translation engine.
This post-modern problem means that Google's machines may be training themselves on data generated by Google's machines, so rather than getting incrementally better with each new model, they simply stagnate.
"It's not a big problem yet – it could get worse," Norvig said. "We mostly address it by judging the quality of a site. If you look good, we'll keep your examples; if you look sketchy we'll toss them out."
Google has already sought to make it difficult for spammers to pollute the web with poorly translated text by shutting down its Google Translate API. Norvig also disclosed an approach in which Google tried to fingerprint each translation through precise word and syntax choices that wouldn't be obvious to the reader but would be obvious to Google's bots; the company retired the scheme, however, after it proved ineffective. ®
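Norvig gave no implementation details, but one common way such a fingerprint can work is keyed synonym choice: wherever the translator could pick either of two equivalent words, it picks the one dictated by a secret key, and a detector later checks whether a text's choices match that key. The sketch below is purely illustrative, assuming a made-up synonym list and key scheme; it is not Google's actual method, which was never published.

```python
# Toy sketch of watermarking text via word choice (hypothetical,
# NOT Google's actual retired scheme, which was never disclosed).
import hashlib

# Hypothetical interchangeable synonym pairs; each occurrence is a "slot".
SYNONYMS = [("big", "large"), ("quick", "fast"), ("start", "begin"),
            ("help", "assist"), ("buy", "purchase")]


def keyed_bit(key: str, slot: int) -> int:
    """Derive a pseudorandom bit from the secret key and slot index."""
    digest = hashlib.sha256(f"{key}:{slot}".encode()).digest()
    return digest[0] & 1


def embed(words: list[str], key: str) -> list[str]:
    """Rewrite each synonym-slot word to the variant the key dictates."""
    lookup = {w: pair for pair in SYNONYMS for w in pair}
    out, slot = [], 0
    for w in words:
        if w in lookup:
            out.append(lookup[w][keyed_bit(key, slot)])
            slot += 1
        else:
            out.append(w)
    return out


def detect(words: list[str], key: str) -> float:
    """Fraction of synonym slots that agree with the key (1.0 = watermarked)."""
    lookup = {w: pair for pair in SYNONYMS for w in pair}
    hits, slot = 0, 0
    for w in words:
        if w in lookup:
            hits += w == lookup[w][keyed_bit(key, slot)]
            slot += 1
    return hits / slot if slot else 0.0
```

A watermarked text scores 1.0 against the embedding key, while unmarked text hovers near 0.5, letting a crawler discard its own output; the drawback, as Norvig's remarks suggest, is that paraphrasing or re-translation easily destroys the signal.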