Roundup Hello, here is a very quick rundown of this week's AI goodies you may have missed.
Floating point maths for AI chips: Facebook has published code that improves the efficiency of number crunching to train and deploy neural networks using AI chips.
There’s a ton of matrix maths performed when you feed a neural network data to train it for a specific function, whether that’s a natural language processing or a computer vision task. The numbers are encoded as floating-point values during training, and are then rounded to lower-precision quantized values so the model can be deployed more quickly.
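As a rough illustration of that rounding step (this is a generic, naive scheme, not Facebook's technique), here is a sketch of symmetric 8-bit quantization of trained float32 weights in NumPy:

```python
import numpy as np

def quantize_int8(weights):
    # Map the largest-magnitude weight to 127, everything else proportionally
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values; rounding error is at most ~scale/2
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)  # toy "trained" weights
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Each weight is now a single byte rather than four, which is why quantized models are cheaper to ship and run, at the cost of the small rounding error visible in `w_hat`.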
Developers at Facebook have crafted new techniques to make all the maths in 16-bit floating point (FP16) more efficient, so that models don’t have to be quantized before they are deployed.
They tested their methods using “a popular” 28-nanometer ASIC to train a ResNet-50 model on the ImageNet dataset, and saw only a tiny loss of accuracy compared to the original models, which used 8-bit integers and 32-bit floating point (FP32).
“We achieved 75.23 percent top-1 and 92.66 percent top-5 accuracy on the ImageNet validation set, a loss of 0.9 percent and 0.2 percent, respectively, from the original,” the company said this week.
Using software tricks to save on compute will be the future as hardware limitations begin to creep in.
You, too, can become an RL whiz: OpenAI has released a range of resources to help those interested in breaking into the world of reinforcement learning.
The guide, dubbed Spinning Up in Deep RL, is made up of a document introducing the basic theory around RL and the algorithms commonly used in research. There’s also another section with advice on how to become a researcher in this competitive field. If you get that far, there’s a library with code implementing the algorithms, and a list of exercises for you to try out.
“Spinning Up in Deep RL is part of a new education initiative at OpenAI which we’re ‘spinning up’ to ensure we fulfill one of the tenets of the OpenAI Charter: 'Seek to create a global community working together to address AGI’s global challenges,'" the research institute said this week.
"We hope Spinning Up will allow more people to become familiar with deep reinforcement learning, and use it to help advance safe and broadly beneficial AI."
And for those in San Francisco, OpenAI are hosting a workshop in February next year for anyone working through the Spinning Up in Deep RL material.
You can apply for it here.
It’s all in the eyes: DeepMind are working with the Moorfields Eye Hospital in London, UK, to see if AI systems can predict the onset of eye diseases.
The partnership between both organisations has focused on diagnosis so far, when diseases such as age-related macular degeneration (AMD) have already begun. We have written about that in more detail here. But now, they want to see if it's possible to spot signs of problems before symptoms begin to occur.
Moorfields Eye Hospital will hand over a dataset containing eye scans from 7,000 patients. The data has been anonymised so that patients can’t be identified, now that the NHS is becoming more serious about data protection.
DeepMind wants to see if it can predict wet AMD, a more dangerous form of the disease that can lead to permanent blindness. The goal is to study scans from patients with wet AMD in one eye, and see if AI systems can spot early signs of deterioration in the other eye.
“Predicting potential indicators of disease is a much more complicated – and computationally intense – task than identifying existing known symptoms,” it said.
You can read about it in more details here. ®