Don't fall for the hype around OpenAI's Rubik's Cube playing robot, Berkeley bans facial recognition, and more

All in a week's work


Roundup Just in case you're addicted to the world of AI, here's more news beyond what we have already covered this week.

Yay or nay - OpenAI’s robot Rubik’s Cube hand: Some in the AI community have been gushing over OpenAI’s most recent video showing a mechanical hand deftly solving a Rubik’s Cube, even when it was being disturbed by a stuffed giraffe toy.

Here’s the video below. It’s well produced, colourful, and looks pretty impressive at first. Most people use two hands to play with a Rubik’s Cube, and here’s a robot solving it with just one hand.

[YouTube video]

But a closer inspection of the research paper [PDF] reveals that the robot, known as Dactyl, could only complete the puzzle from a fully scrambled state 20 per cent of the time – that’s just two times out of the ten trials the researchers performed.

The success rate rose to 60 per cent when the cube was half-scrambled – a state that requires just 15 moves instead of the full 26 needed to crack the challenge. All of that also depends on the robot not dropping the toy, which happened 80 per cent of the time in testing.

There are superior algorithms that can solve a Rubik’s Cube faster. The focus of attention here should not be on the puzzle-solving technique, though, but rather the training of a robot hand.

OpenAI taught Dactyl using reinforcement learning, a method that teaches an agent to perform a specific task through trial and error. The bot is rewarded every time it makes a good move that gets it closer to cracking the Rubik’s Cube, receives a bonus reward when the cube is fully solved, and gets a negative reward when it drops the toy. It played with the Rubik’s Cube for the equivalent of 10,000 years during training – far longer than any human lifetime.
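The reward scheme described above can be sketched as a simple shaping function. This is a hypothetical illustration of the idea, not OpenAI's actual code; the function name, constants, and arguments are all assumptions.

```python
def step_reward(moves_left_before, moves_left_after, solved, dropped):
    """Hypothetical reward shaping for a cube-manipulation agent.

    A positive reward for each move that brings the cube closer to
    the solved state, a bonus when it is fully solved, and a penalty
    for dropping the toy. All values are made up for illustration.
    """
    reward = 0.0
    if moves_left_after < moves_left_before:
        reward += 1.0   # progress toward the solved state
    if solved:
        reward += 5.0   # bonus for fully solving the cube
    if dropped:
        reward -= 20.0  # penalty for dropping the toy
    return reward
```

Summed over millions of simulated episodes, a signal like this is what nudges the policy toward dexterous behaviour without anyone hand-coding the finger motions.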

Crucially, a non-AI algorithm was used to work out the solution to the cube, while reinforcement learning handled the robotic manipulation of the gizmo. Again, the real focus of attention should be on training the robot hand, not whether it's any good at solving a Rubik's Cube efficiently.

“Since May 2017, we’ve been trying to train a human-like robotic hand to solve the Rubik’s Cube,” the AI research lab said this week.

“We set this goal because we believe that successfully training such a robotic hand to do complex manipulation tasks lays the foundation for general-purpose robots. We solved the Rubik’s Cube in simulation in July 2017. But as of July 2018, we could only manipulate a block on the robot. Now, we’ve reached our initial goal.”

While this is a fascinating and important step forward for machine learning, we’re not so sure that a 20 per cent success rate really counts as having fully solved a problem, or whether playing with a Rubik’s Cube gets us any closer to general-purpose robots. AI robots still need to be carefully trained to do a specific task: just because one of them can rotate a Rubik’s Cube doesn’t mean it can, say, roll a die.

Berkeley has banned facial recognition: Berkeley in California has become the latest US city to ban the governmental use of facial recognition technology.

Berkeley City Council unanimously voted this week in support of an ordinance that prevents the technology being used by government agencies, including the local police. The city joins San Francisco and Oakland in California, as well as Somerville, Massachusetts, which have all passed similar bans.

“We cannot afford to write off the various performance issues related to facial recognition technology as mere engineering problems; facial recognition surveillance poses a range of fundamental constitutional problems,” said Kate Harrison, a councilmember who pushed the ordinance, according to The Mercury News, a local Bay Area publication. “In the face of federal and state inaction, it is incumbent upon cities to enact laws that protect communities from mass surveillance.”

The ordinance was motivated by the well-known shortcomings of facial recognition. Machine learning models often struggle to identify women and people with darker skin as accurately as white men, due to their underrepresentation in skewed training datasets, making the technology inappropriate for use by law enforcement and other government agencies.

Play the COMPAS game! Remember the dodgy COMPAS algorithm that was being used in courtrooms to estimate how likely an accused criminal was to reoffend?

Well, if you don’t, here’s ProPublica’s investigation from 2016, which showed that black people were more likely to score higher than any other group. Now, MIT Technology Review has taken the same dataset analysed by ProPublica – 7,200 profiles containing people’s names, race, age, and their risk scores as calculated by COMPAS – and turned it into an interactive game.

Each defendant is represented as a dot and sorted into one of ten bins, marked 1 to 10. A score of one signifies a 10 per cent chance of reoffending, five a 50 per cent chance, and ten a 100 per cent chance.

Players are then guided through different scenarios and asked to find the threshold above which most of the defendants flagged as likely to reoffend were indeed arrested for another crime. Finding such a threshold would mean the algorithm was accurate.
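The thresholding exercise can be sketched in a few lines. The defendant list below is synthetic, invented purely for illustration – only the 1-to-10 decile scoring mirrors the real COMPAS dataset.

```python
# Each defendant gets a COMPAS decile score (1-10) and an outcome:
# whether they actually reoffended. This data is made up.
defendants = [
    (3, False), (7, True), (9, True), (2, False),
    (8, False), (5, True), (10, True), (4, False),
]

def accuracy_at_threshold(data, threshold):
    """Treat scores >= threshold as 'high risk' and measure how often
    that prediction matches the actual reoffending outcome."""
    correct = sum((score >= threshold) == reoffended
                  for score, reoffended in data)
    return correct / len(data)

# Sweep the threshold to find the most accurate cut-off point.
best = max(range(1, 11), key=lambda t: accuracy_at_threshold(defendants, t))
```

Even on a toy sample like this, no cut-off is perfectly accurate – some low-scoring defendants reoffend and some high-scoring ones don't – which is exactly the tension the game asks players to wrestle with.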

But as you keep playing, the threshold becomes trickier to find. Some people were judged too harshly by COMPAS and jailed for longer than necessary, while others were set free only to reoffend. The game becomes even more confusing when race is involved, and accuracy drops more drastically for black people than for white people.

You can play it here. ®
