Graphcore's AI chips may not be as powerful as Nvidia's GPUs, but could provide good bang for your buck

Plus: SoftBank halts production of Pepper the robot

In brief The latest results from benchmarking consortium MLPerf, which tracks the best chips for training the most popular neural networks, are out, and a new player has entered the game: Graphcore.

Each MLPerf release follows the same pattern: a sprawling spreadsheet records how long various systems take to train or run particular machine-learning models, with the numbers submitted by the hardware vendors themselves.

Nvidia and Google pretty much always lead the pack, so the latest results aren't particularly surprising. What is different this year is that Graphcore has joined for the first time. That's a good sign: it suggests the company's technology is maturing and that it's willing to compare itself publicly against its competitors.

Although Graphcore’s IPU-PODs weren't as fast as Nvidia’s A100 GPUs or Google’s latest TPUs at training the computer-vision model ResNet-50 and the language model BERT, the company’s hardware is much cheaper, so it may have a performance-per-dollar advantage. Google’s TPUs, for what it's worth, are only available via the cloud.

You can see the full results here, and more on Graphcore here from our sister site, The Next Platform.

Say bye-bye to Pepper, the robot

SoftBank has stopped producing its humanoid Pepper robot and is cutting jobs across its robotics unit. Pepper is instantly recognizable by its white body, complete with a head, two arms, a torso, a wheeled lower body, and a screen. It’s about the size of a small child, with two circular black eyes and a small smile on its face.

Launched in 2014, the machine was designed to do all sorts of tasks, such as greeting customers or showing useful information like menus or locations. But it hasn't been popular, and SoftBank has struggled to shift the 27,000 units it made. Now, it has decided to stop making them altogether, according to Reuters, and hundreds of jobs across France, the US, and the UK have been cut.

Experiments to roll the robot out across supermarkets and offices haven’t always gone well. In 2018, a Scottish supermarket reportedly fired Pepper after it freaked shoppers out and kept telling them to look "in the alcohol section" for unrelated items.

The World Health Organization's AI ethics guidance

This week, the WHO published a 165-page report outlining an ethics and governance framework for AI in health.

It's founded on six main principles that the org hopes "can ensure the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and the communities and individuals whose health will be affected by its use."

Those six principles are:

  1. Protecting autonomy: Machines may automate tasks and generate results, but humans should always remain in charge of the systems and oversee all medical decisions.
  2. Promoting human well-being, human safety, and the public interest: The effects of computer algorithms should be studied and regulated to make sure they don't harm people.
  3. Ensuring transparency, explainability and intelligibility: The technology must be understandable to everyone who uses or is affected by it, whether that's developers, healthcare professionals, or patients.
  4. Fostering responsibility and accountability: Understand the limits of AI technology and where it may go wrong, and make sure someone can be held responsible when it does.
  5. Ensuring equity and inclusivity: AI should not be biased, nor should it perform worse for people on the basis of age, sex, gender, income, race, ethnicity, sexual orientation, and so on.
  6. Promoting tools that are responsive and sustainable: Machine-learning software should be designed to be as computationally efficient as possible.

Facebook upgrades its research dataset to help developers build house robots

Chores are mundane; no one in their right mind really likes doing the dishes or the laundry. Unfortunately, humans are going to have to keep doing them until AI-powered robots get nimble and smart enough to take over.

Simple tasks like picking up cups and putting them in the dishwasher or a cupboard might be easy for us, but they’re incredibly difficult for machines. Roboticists can dream about building the perfect algorithm or neural net, but without training data it’ll be no good.

That’s why, in 2019, Facebook released AI Habitat, a dataset of simulated home interiors designed to help developers. Now it has upgraded that to Habitat 2.0, which contains 111 unique 3D-rendered room layouts populated with 92 objects, such as drawers, carpets, sofas, plants, and fruit.

Future AI agents can be trained to complete specific chores in simulation, gaining enough experience before they’re tested in real-world conditions. What’s most interesting is that robots designed to tidy homes in the US will probably perform differently in other countries, where home layouts and chores vary.

“In the future, Habitat will seek to model living spaces in more places around the world, enabling more varied training that takes into account cultural- and region-specific layouts of furniture, types of furniture, and types of objects,” said Dhruv Batra, a Facebook research scientist.
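To give a flavour of that train-in-simulation idea, here is a minimal toy sketch in Python. The environment class and training loop below are our own illustration rather than Habitat's actual API: a one-dimensional stand-in for a home, with a tabular agent learning a chore entirely in simulation.

    import random

    # Toy stand-in for a simulated home; the class and method names here
    # are hypothetical illustrations, not Facebook's Habitat API.
    class ToyKitchenEnv:
        """A one-dimensional 'kitchen': the agent must reach the dishwasher."""
        def __init__(self, length=10):
            self.length = length
            self.position = 0

        def reset(self):
            self.position = 0
            return self.position

        def step(self, action):
            # action is +1 (towards the dishwasher) or -1 (away from it)
            self.position = max(0, min(self.length, self.position + action))
            done = self.position == self.length
            reward = 1.0 if done else -0.01  # small step penalty rewards speed
            return self.position, reward, done

    # Learn a state-value table entirely in simulation, the same
    # "practise before the real world" idea Habitat enables at scale.
    env = ToyKitchenEnv()
    value = {s: 0.0 for s in range(env.length + 1)}
    for episode in range(200):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy choice between the two possible moves
            if random.random() < 0.1:
                action = random.choice([-1, 1])
            else:
                action = 1 if value.get(state + 1, 0.0) >= value.get(state - 1, 0.0) else -1
            next_state, reward, done = env.step(action)
            # one-step TD(0) update of the value estimate
            value[state] += 0.1 * (reward + 0.9 * value[next_state] - value[state])
            state = next_state

The point of the exercise is that everything above happens in software; only once the learned behaviour looks sensible would you risk letting a physical robot loose on a real kitchen.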

You can read more about the dataset here. ®
