Can Amazon's AI really detect fear? Plus: Fresh deepfake video freaks everyone out again

Nvidia is pleased with its latest numbers, and more

Roundup Our weekly AI roundup is back from a little summer break, and once again covering bits and pieces from the world of machine learning beyond what's already been reported by Team Register.

Eight-billion parameter model trained in under an hour: Nvidia claims it has managed to train the largest-known language model, with 8.3 billion parameters, in just 53 minutes.

The system is based on BERT [PDF], which was built by engineers at Google AI. It uses a transformer architecture that learns to perform a range of language tasks, such as answering questions and generating text, using an encoder to read input text and a decoder to predict which words should come next given the previous sentence.

The Google model was already pretty hefty. A smaller version, dubbed BERT Base, contained 110 million parameters and needed 16 TPU chips to train, while a larger option, known as BERT Large, contained 340 million parameters and required 64 TPU chips running over four days. Now, Nvidia tells us it has gone a step further to build an even larger BERT model that's about 25 times bigger, yet can be trained in just 53 minutes.
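If you're wondering where those parameter counts come from, here's a back-of-the-envelope sketch, assuming the standard BERT configuration from the paper (WordPiece vocabulary of 30,522, 512 positions, 2 segment types, and a feed-forward layer four times the hidden size). The helper function below is our own illustration, not Nvidia's or Google's code.

```python
# Rough parameter count for a BERT-style transformer encoder.
VOCAB, MAX_POS, SEGMENTS = 30522, 512, 2

def bert_params(layers, hidden):
    ffn = 4 * hidden
    # token + position + segment embeddings, plus one layer-norm (scale + bias)
    embed = (VOCAB + MAX_POS + SEGMENTS) * hidden + 2 * hidden
    # per layer: Q/K/V/output projections with biases...
    attn = 4 * (hidden * hidden + hidden)
    # ...the feed-forward up- and down-projections with biases...
    feedfwd = hidden * ffn + ffn + ffn * hidden + hidden
    # ...and two layer-norms at 2*hidden parameters each
    per_layer = attn + feedfwd + 4 * hidden
    # final pooler layer on top of the stack
    pooler = hidden * hidden + hidden
    return embed + layers * per_layer + pooler

print(bert_params(12, 768))   # BERT Base:  ~110 million
print(bert_params(24, 1024))  # BERT Large: ~335 million
```

The Base figure lands almost exactly on the advertised 110 million; the Large tally comes to roughly 335 million, which the paper rounds up to 340 million.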

All that speed obviously requires heaps of hardware: 1,472 Nvidia V100 GPUs, to be exact. The giant model is also fast during the inference stage for some tasks, thanks to "highly optimized GPU code," according to Nv. It can spit out answers when tested on the BERT-Base SQuAD dataset in 2.2 milliseconds. You can read about it in more detail here.

Also, here are Nvidia's latest Q2 financial results: At the end of last week, the GPU giant reported year-on-year declines in the second quarter of its fiscal 2020. Here are the numbers for those three months to July 28:

  • Revenue of $2.58bn, down 17 per cent from the previous year, though better than Wall St's expectations by $30m. Gaming made up $1.31bn of that, down 27 per cent year-on-year though about what analysts expected; and data center contributed $655m, down 14 per cent, and slightly lower than expectations.
  • Net income of $552m, a decrease of 50 per cent year-on-year.
  • GAAP diluted earnings per share of $0.90, down 49 per cent from a year ago, but beating expectations by seven cents.

All the numbers are downhill compared to the previous year, except for operating expenses, though they all point up compared to Q1 of fiscal 2020. "We achieved sequential growth across our platforms," said CEO Jensen Huang. Shares rose roughly seven per cent on the back of a Q3 outlook in line with Wall St's expectations.

In brief... Machine-learning algorithms that detect hate speech online are biased against black people, two studies have shown... A fired Johns Hopkins professor isn't going to join Facebook after all to work on speech-recognition tech...

And the NYT has a behind-the-scenes look at the armies of humans training today's AI systems.

Uh oh, Amazon reckons its facial recog tech can identify fear: Amazon’s controversial product Rekognition can now, apparently, identify fear in people’s faces.

The facial recognition system can analyze snaps of fizogs to predict things like a person’s gender, age, and emotions. It can supposedly tell when someone appears happy, sad, angry, surprised, disgusted, calm, and confused, and now if they happen to look scared, too.

“We have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’,” it quietly announced a few days ago.
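For the curious, Rekognition surfaces these emotion scores through the `DetectFaces` API in the AWS SDK. Here's a minimal sketch using boto3, the Python SDK; the image bytes and the sample response below are placeholders of our own, and a real call needs AWS credentials configured.

```python
def top_emotion(face_detail):
    """Pick the highest-confidence emotion from one FaceDetails entry."""
    best = max(face_detail["Emotions"], key=lambda e: e["Confidence"])
    return best["Type"], best["Confidence"]

def detect_emotions(image_bytes):
    """Run Rekognition face analysis and return each face's top emotion."""
    import boto3  # AWS SDK for Python; requires configured credentials
    client = boto3.client("rekognition")
    resp = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # the default attribute set omits emotions
    )
    return [top_emotion(face) for face in resp["FaceDetails"]]
```

Each face comes back with a confidence score per emotion, so "FEAR" is just one more label competing with "CALM" and friends rather than a yes/no verdict.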

The software has already come under fire for being biased against women and people of color, making it inappropriate to use in law enforcement and surveillance. Adding fear sensing to its repertoire just makes it that much more creepy. Why exactly do you want to detect when someone’s scared, anyway?

A study, published last month in Psychological Science in the Public Interest, concluded that emotions cannot be automatically inferred from people’s facial motions. Psychologists and computer scientists believe emotions aren’t expressed the same way by different folks, so AI-based emotion detection is unlikely to work as advertised.

Bill Hader + Tom Cruise deepfake mashup: Another deepfake video has been making the rounds on social media this month, generating outcry from the mainstream media. It's an AI-doctored vid of American comedian Bill Hader's face seamlessly morphing into Tom Cruise's mid-impression during a TV talk show interview. Watch it below, paying close attention around 54 seconds in. It's fairly impressive, and chilling, what can be done with straightforward machine-learning code right now.

Youtube Video

If you've been following The Register, none of this will be new to you. We wrote about these videos, generated by YouTuber Ctrl-Shift-Face, back in May this year. ®
