You've been a Baidu boy! Tech giant caught cheating on AI tests
Looks like they may need to start developing some non-artificial integrity
Baidu has been shot in its liquid metal head for cheating in a standardised and independent Artificial Intelligence test.
Hosted by Stanford University's vision lab, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) saw Baidu's algorithms compete alongside those from Google, Microsoft, Apple and Facebook's FAIR, among others.
The Register spoke to Dr Sean Holden, senior lecturer in machine learning at Cambridge University, about the relationship between image recognition and what is classically considered AI.
"Historically, image recognition has had a very big part in artificial intelligence research, and it has taken big steps forward recently," said Dr Holden. "If you want to take an image and work out what's in it, there's loads of relevant research tucked into that, whether it's finding edges or creating Bayesian 3D models, it's quite a huge field."
"The thing is, most AI research actually isn't really working on the entire field of AI," says Dr Holden, at least not in the sense of creating general intelligence. "Human-like performance," Holden explained, "is too hard, it's too difficult to focus on."
The hope is that a technical edge in image-recognition accuracy will eventually translate into a commercial one, though the leading companies are all still hovering at an error rate of around five per cent.
Baidu had claimed a record-low error rate of 4.58 per cent in the image recognition tests. Google, in comparison, hit a 4.8 per cent error rate, while Microsoft achieved 4.94 per cent.
The ILSVRC organisers announced in May that one group had circumvented their testing policy by creating and using multiple accounts to run more evaluations every week than their competitors. An updated note identifies this group as Baidu.
A statement from Baidu's Heterogeneous Computing team states that "recently the ILSVRC organizers contacted [us] to inform us that we exceeded the allowable number of weekly submissions to the ImageNet servers (~ 200 submissions during the lifespan of our project)."
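Why does a weekly submission cap matter? Evaluating against the same hidden test set over and over lets a team simply report the run that happened to score lowest, so measurement noise alone can shave the headline error rate. The sketch below is purely illustrative: it assumes a hypothetical 10,000-image test set and model variants that all have an identical 5 per cent true error rate; none of the numbers are Baidu's actual figures.

```python
import random

random.seed(0)

TEST_SIZE = 10_000   # illustrative test-set size (assumption, not ImageNet's)
TRUE_ERR = 0.05      # assume every submitted variant has the same 5% true error

def measured_error():
    # Each test image is misclassified independently with probability TRUE_ERR,
    # so the measured error rate fluctuates around the true rate.
    mistakes = sum(random.random() < TRUE_ERR for _ in range(TEST_SIZE))
    return mistakes / TEST_SIZE

def best_of(n_submissions):
    # Cherry-pick: report only the lowest error seen across n submissions.
    return min(measured_error() for _ in range(n_submissions))

print(f"best of 3:   {best_of(3):.4f}")    # roughly what a weekly limit allows
print(f"best of 200: {best_of(200):.4f}")  # ~200 submissions, per Baidu's statement
```

Even though every simulated model is equally good, the best-of-200 figure reliably comes in below the true 5 per cent, which is exactly the distortion the submission limit exists to prevent.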
"We apologize for this mistake and are continuing to review the results. We have added a note to our research paper, Deep Image: Scaling up Image Recognition [arxiv], and will continue to provide relevant updates as we learn more."
The apology, directed at the ILSVRC community, acknowledges "the mistake" and says the team are "staunch supporters of fairness and transparency in the ImageNet Challenge and are committed to the integrity of the scientific process".
Setting image recognition to one side, Dr Holden explained developments in Skynet-style AI to us. "Basically, general intelligence is a very long way away. The more you learn about neuroscience, about how the human brain works – it's not happening in my lifetime. In a long enough timeline, yes, but I'm not holding my breath." ®