Roundup: Hello, here’s a very quick roundup of some of the interesting AI announcements from this week. Read on if you like robots and GPUs.
Nvidia robotics lab opens: Nvidia’s AI robotics lab has officially opened its doors after CEO Jensen Huang paid a visit to its Seattle office.
The lab was announced back in November last year with an initial staff of 14 researchers, but is now hoping to expand to 32 by the middle of 2019. It will be led by Dieter Fox, who is currently on leave from his role as a professor of computer science at the University of Washington.
Researchers will focus on merging AI software with hardware to create robots for manufacturing and healthcare purposes. “We want to develop robots that can naturally perform tasks alongside people,” said Fox. “To do that, they need to be able to understand what a person wants to do and figure out how to help her achieve a goal.”
At the moment, their main project revolves around making a robot that is useful in the kitchen. Dubbed the “kitchen manipulator,” the robot will be expected to perform tasks such as retrieving cutlery and plates from drawers and cupboards.
“All of this is working toward enabling the next generation of smart manipulators that can also operate in open-ended environments where not everything is designed specifically for them,” Fox added. “By pulling together recent advances in perception, control, learning and simulation, we can help the research community solve some of the greatest challenges in robotics.”
Rent T4 GPUs in Google Cloud now: Huzzah, commoners can now rent Nvidia’s Tesla T4 GPUs via Google's Cloud Platform.
The service was previously only available to private customers, but has now been expanded to users in Brazil, India, the Netherlands, Singapore, Tokyo, and the United States.
It’s aimed at running inference workloads, so it won’t be much use for the heavier lifting required to train neural networks. Each chip has 16GB of GPU memory, supports FP32, FP16, INT8, and INT4 precision, and can, apparently, perform up to 260 trillion operations per second at its lowest precision.
The price will vary depending on the region, but the figure being floated for now is about $0.95 per hour on Google’s Compute Engine on-demand service. You can get it for as little as $0.29 per hour via the cheaper Preemptible VM instances, if you’re okay with the possibility of being interrupted and kicked off at short notice.
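To put those two price points in perspective, here’s a back-of-the-envelope sketch comparing a month of on-demand versus preemptible T4 time. The hourly rates are the region-dependent figures quoted above; everything else (round-the-clock usage, ignoring VM, disk, and network charges) is a simplifying assumption, not an official Google price calculation.

```python
# Rough monthly cost comparison for a single T4 GPU, using the
# per-hour prices quoted in the article (region-dependent).
ON_DEMAND_RATE = 0.95    # USD per GPU-hour, Compute Engine on-demand
PREEMPTIBLE_RATE = 0.29  # USD per GPU-hour, Preemptible VM instances

def monthly_gpu_cost(hours_per_month: float, rate: float) -> float:
    """Raw GPU cost only -- excludes VM, disk, and network charges."""
    return hours_per_month * rate

hours = 24 * 30  # assumption: one GPU running around the clock

on_demand = monthly_gpu_cost(hours, ON_DEMAND_RATE)
preemptible = monthly_gpu_cost(hours, PREEMPTIBLE_RATE)

print(f"On-demand:   ${on_demand:,.2f}/month")    # $684.00
print(f"Preemptible: ${preemptible:,.2f}/month")  # $208.80
print(f"Savings:     {1 - PREEMPTIBLE_RATE / ON_DEMAND_RATE:.0%}")  # 69%
```

In other words, the preemptible rate cuts the GPU bill by roughly two thirds, in exchange for tolerating evictions — a sensible trade for inference fleets that can restart cleanly.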
You can read more about it here.
Don’t sell Rekognition to the US gov, say Amazon’s shareholders: Amazon has been touting its facial recognition technology to US officials, but even its shareholders have told the e-commerce giant to cut it out.
A statement organized by OpenMIC, a non-profit company that helps shareholders engage with businesses, shows that Amazon’s investors have demanded that Amazon stop selling Rekognition to the government.
They’re concerned that it’s building a surveillance system “readily available to violate rights and target communities of color.” They also pointed out that 450 Amazon employees had expressed dismay at the potential harm.
Amazon has been shifty about its involvement with the government so far. The statement said it had cloud computing contracts with Immigration and Customs Enforcement (ICE), and is “reportedly marketing Rekognition to ICE”.
“Shareholders have little evidence our Company is effectively restricting the use of Rekognition to protect privacy and civil rights,” it said.
“Shareholders request that the Board of Directors prohibit sales of facial recognition technology to government agencies unless the Board concludes, after an evaluation using independent evidence, that the technology does not cause or contribute to actual or potential violations of civil and human rights.”
There are more details here. ®