Europe needs more dosh for AI, Google's TPU2 vs Nvidia's Tesla V100, and more
All we need now is a robo-news-reader (quick, bring back Ananova)
Roundup Here's your roundup of machine-learning news from this week, beyond what we've already covered.
Axon AI Ethics board A coalition of civil rights groups and technology researchers has written a letter to Axon, a company developing AI to analyze video footage for law enforcement.
Axon recently announced it had set up an AI ethics board to guide its products and services. In response, the letter urges the company not to develop real-time facial recognition for police body cameras, which risks misidentifying civilians as criminals; to ethically review all its other products; and to consult “survivors of law enforcement harm and violence” for advice.
You can read the letter here.
Brain in Japan Google Brain is hiring in Japan for the first time.
The ad-giant is looking to recruit research scientists in Tokyo with a PhD in computer science or any field related to machine learning and deep learning, as well as programming experience with C, C++, or Python.
You can have a crack at applying here.
TPU2 vs. Tesla Volta RiseML, a startup focused on scaling up machine learning workloads on Kubernetes clusters, has run some simple experiments comparing Google’s TPU2 and Nvidia’s Tesla V100 chips.
In a blog post, Elmar Haussmann, co-founder and CTO at RiseML, describes training a ResNet-50 model on ImageNet with four TPU2 chips versus four V100 GPUs (16GB of memory per chip, 64GB in total on each side), using the same batch size for each trial run.
“For the V100 experiments, we used a p3.8xlarge instance (Xeon E5-2686@2.30GHz 16 cores, 244 GB memory, Ubuntu 16.04) on AWS with four V100 GPUs (16 GB of memory each). For the TPU experiments, we used a small n1-standard-4 instance as host (Xeon@2.3GHz two cores, 15 GB memory, Debian 9) for which we provisioned a Cloud TPU (v2-8) consisting of four TPUv2 chips (16 GB of memory each).”
The chips were tested for raw training speed, measured as how many images per second each setup can crunch through while training the network.
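RiseML used its own benchmarking harness, but the metric itself is simple to reproduce. Below is a minimal, illustrative sketch of timing ResNet-50 training throughput with tf.keras on synthetic data; the batch size, step counts, and random inputs are our assumptions, not RiseML's setup.

```python
# Rough throughput sketch (not RiseML's harness): time ResNet-50 training
# steps on synthetic ImageNet-shaped data and report images per second.
import time
import numpy as np
import tensorflow as tf

BATCH_SIZE = 64    # RiseML swept 128, 256, 512 and 1024 on the accelerators
WARMUP_STEPS = 5   # exclude graph building/compilation from the timing
TIMED_STEPS = 20

model = tf.keras.applications.ResNet50(weights=None, classes=1000)
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")

# One synthetic batch, reused every step, so only compute is measured.
images = np.random.rand(BATCH_SIZE, 224, 224, 3).astype("float32")
labels = np.random.randint(0, 1000, size=(BATCH_SIZE,))

for _ in range(WARMUP_STEPS):
    model.train_on_batch(images, labels)

start = time.time()
for _ in range(TIMED_STEPS):
    model.train_on_batch(images, labels)
elapsed = time.time() - start

print(f"{TIMED_STEPS * BATCH_SIZE / elapsed:.1f} images/sec at batch size {BATCH_SIZE}")
```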
The results show that performance is pretty similar for the two chips, with the TPU2 coming out slightly ahead at larger batch sizes. At a batch size of 128, the V100s are faster, but at 256, 512, and 1024 the TPU2s are quicker. As the batch size increases, the gap between TPU2 and V100 narrows: at 1024, the TPU2 leads by around 2 per cent.
A quick glance at pricing also reveals that renting a Cloud TPU (four TPU2 chips) is cheaper than running the V100s on AWS on-demand or reserved instances, though the AWS bill can drop lower still if you opt for spot instances. Training ResNet-50 on ImageNet to about 76.4 per cent accuracy cost roughly $73 (~£53).
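The arithmetic behind that comparison is simple: the cost of a run is the accelerator's hourly rental price multiplied by the wall-clock hours needed to hit the target accuracy. The sketch below shows the calculation with assumed, illustrative hourly rates and durations; none of the numbers are RiseML's measurements.

```python
# Back-of-the-envelope cost comparison: cost = hourly price * training hours.
# The hourly rates and durations below are assumptions for illustration only.
def training_cost(hours: float, usd_per_hour: float) -> float:
    """Dollar cost of renting an accelerator for a full training run."""
    return hours * usd_per_hour

ASSUMED_RATES_USD_PER_HOUR = {
    "Cloud TPU (4x TPUv2)": 6.50,                  # assumed list price
    "AWS p3.8xlarge (4x V100, on-demand)": 12.24,  # assumed list price
}

for name, rate in ASSUMED_RATES_USD_PER_HOUR.items():
    for hours in (5, 10, 15):                      # hypothetical durations
        print(f"{name}: {hours}h -> ${training_cost(hours, rate):.2f}")
```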
“In terms of raw performance on ResNet-50, four TPUv2 chips (one Cloud TPU) and four V100 GPUs are equally fast (within 2 per cent of each other) in our benchmarks. We will likely see further optimizations in software (e.g., TensorFlow or CUDA) that improve performance and change this.”
Hardware companies are notoriously fluffy about benchmarks, so it’s helpful to see external groups trying to perform independent comparisons.
Brookings AI report Here’s another AI report. This time it’s written by Brookings, a US think tank.
It contains the usual spiel about how AI is already impacting finance, healthcare, transportation, and so on, and about the need to address algorithmic bias, transparency, data access and security, and legal liability.
The report recommends nine steps to maximize the benefits of AI:
- Encourage greater data access for researchers without compromising users’ personal privacy.
- Invest more government funding in unclassified AI research.
- Promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy.
- Create a federal AI advisory committee to make policy recommendations.
- Engage with state and local officials so they enact effective policies.
- Regulate broad AI principles rather than specific algorithms.
- Take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms.
- Maintain mechanisms for human oversight and control.
- Penalize malicious AI behavior and promote cybersecurity.
The report is also a pretty good read to learn more about interesting cases such as the Chinese “Sharp Eyes” program, in which law enforcement uses machine learning for surveillance and to identify criminals.
Read it in full here.
New version of PyTorch PyTorch 0.4.0 has been released. The update includes bug fixes and makes it easier to build and deploy neural networks, and to run distributed training.
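For a flavour of what building a network in PyTorch looks like, here is a minimal sketch of a single training step on a toy model; the network, random data, and hyperparameters are illustrative and not taken from the release notes.

```python
# Minimal PyTorch sketch: define a toy network and run one training step.
# The model, random data, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# A random batch standing in for real data.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(loss.item())

# For distributed training, the model would typically be wrapped in
# torch.nn.parallel.DistributedDataParallel across processes.
```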
Download it here.
European AI lab A group of AI experts has published an open letter outlining ideas for a European AI research hub intended to attract and retain top researchers.
The European Lab for Learning & Intelligent Systems, dubbed ELLIS, would benefit Europe by creating new jobs and conducting top quality research to make sure Europe doesn’t fall behind the US and China in shaping how AI changes the world.
In their letter, they note that AI investment is larger in North America and China, where there are more academic positions on offer, with higher salaries than in the UK.
“There is no shortage of funding for AI research, but it is extremely hard to attract outstanding researchers. However, it is the quality of the individual researchers that determines the strength of the overall lab, and only top people act as true talent magnets. US institutions and companies have recognized that money spent on those people pays off in multiple ways,” the letter says.
To remain competitive, the letter concludes, ELLIS should create new PhD and postgraduate programs, offer workshops and summer schools for students and visiting researchers, allow ELLIS researchers to split their time between the lab and industry, and support them in launching their own startups.
You can read the letter in full here. ®