Robocop needs reboot, $200m for AI research, UK govt knowingly deployed racist passport system – plus more

Read the latest in the amusing world of AI

Roundup It's another Reg summary of recent AI news.

UK govt launched a face detection system that didn’t work well on darker skin: The British government knowingly rolled out a face detection system that checks passport photos even though it was less accurate on people with darker skin.

The Home Office, the UK department responsible for immigration, security, and law enforcement, admitted as much in response to a public freedom of information request.

“User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph,” the response read, as New Scientist first reported. “However, the overall performance was judged sufficient to deploy.”

There are strict rules about passport photos: people have to wear a neutral expression with their mouths closed. In some cases, the face detection model rejected compliant images, mistaking a black man’s lips for an open mouth or a black woman’s eyes for being closed.

Racial bias in facial recognition systems is well documented. Several experiments have shown they are less accurate on women and people of color. Even when a model’s average performance is high, the test results need to be broken down by demographic too.
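To see why a headline accuracy figure can mask exactly the problem described above, here is a minimal sketch in Python. The numbers and group labels are entirely illustrative assumptions, not real passport-checker data; the point is only that the aggregate can look acceptable while one group fares much worse.

```python
# Illustrative sketch: aggregate accuracy can hide per-group failures.
# All figures below are made up for demonstration purposes.

def accuracy(results):
    """Fraction of photos the checker handled correctly."""
    return sum(results) / len(results)

# 1 = photo handled correctly, 0 = wrongly rejected (hypothetical data)
outcomes_by_group = {
    "group_a": [1] * 95 + [0] * 5,    # performs well on this group
    "group_b": [1] * 70 + [0] * 30,   # performs much worse here
}

# The overall number blends both groups and looks respectable...
all_outcomes = [r for group in outcomes_by_group.values() for r in group]
print(f"overall: {accuracy(all_outcomes):.1%}")

# ...but the disparity only appears in the per-group breakdown.
for group, results in outcomes_by_group.items():
    print(f"{group}: {accuracy(results):.1%}")
```

Reporting the per-group figures alongside the aggregate is the basic discipline the article is calling for: "judged sufficient to deploy" depends entirely on which of these numbers you look at.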

“We are determined to make the experience of uploading a digital photograph as simple as possible, and will continue working to improve this process for all of our customers,” the Home Office said in a statement to New Scientist.

What’s the point of robocops if they can’t even call the police? Expecting robots to stop drive-by shootings, robberies, or even street fights is a bit much - that’s probably above their pay grade. But when these metallic cones on wheels have the words POLICE painted on them, it’s not silly to think that maybe they can alert real law enforcement when there are real dangers, right?

Cogo Guebara definitely thought so. When she witnessed a fight break out in the parking lot of Salt Lake Park, near downtown Los Angeles, she rushed over to alert the local robot patrolling the area. The bot, built by Knightscope, a security company based in Mountain View, and employed by the city of Huntington Park, is known as HP Robocop and stands about five feet tall.

Its body is painted blue and white with the word POLICE emblazoned across it. Guebara pressed the emergency alert button, hoping it would connect directly to law enforcement. But instead of helping her, the thing just yelled at her to “step out of the way,” according to NBC News.

Unfortunately, the distress calls aren’t directed to local police departments and are instead fielded back to, um, Knightscope. HP Robocop appears to be a K5 Outdoor model. “It is best suited for securing large outdoor spaces,” according to the company’s site. “Working together with human eyes, you’ll be better equipped to keep areas such as parking lots, corporate campuses and hospitals safe autonomously.”

Well, it failed in this case, and real police officers were eventually called to the scene. Knightscope’s bots have made the news before: one was found hilariously floating after toppling into a water feature at a shopping mall in Washington DC, and another ran over a poor toddler’s foot.

A new federal AI funding program has appeared: The National Science Foundation, a US government agency that supports science and engineering research, has set aside a budget of $200m to invest in AI research over the next six years.

The cash is part of its National Artificial Intelligence Research Institutes program, which launched this week. The program will focus on funding “larger-scale, longer-term research”. Over half of it - about $120m - will go to up to six research institutes spanning academia, industry, and nonprofit groups.

The planning grants will support teams with $500,000 over two years so organisations can develop strategies and build toward full research institutes. Established research groups will receive larger chunks of $16 million to $20 million spread over four to five years.

Here are some of the areas the NSF is interested in:

  • Trustworthy AI
  • Foundations of Machine Learning
  • AI-Driven Innovation in Agriculture and the Food System
  • AI-Augmented Learning
  • AI for Accelerating Molecular Synthesis and Manufacturing
  • AI for Discovery in Physics

Grant proposals open in January 2020.

Another freaky deepfake and its impact on disinformation: Here’s another one of those so-called deepfake videos to creep you out. A guy in a YouTube video altered his voice to read a poem aloud whilst performing impressions of celebrities and comedians, and his face changed to match whichever person he was mimicking.

Within seconds, his face flits from one to another to look like Robin Williams, John Malkovich, and more. You can watch the uncanniness unfolding below.

YouTube video

Deepfakes, a type of fake content generated using AI algorithms, have slowly been proliferating across the internet, causing people to freak out over their impact on everything from revenge porn to fake news.

A recent report discovered that there were over 14,000 deepfake videos online and that most of them - 96 per cent, in fact - were pornographic. Government officials and politicians are worried these will get realistic enough to convince people of fake news, making them a threat to upcoming elections and democracy as a whole.

But as MIT Tech Review argued, the panic could backfire: if people learn to question the reality of everything they see, they may start doubting the truth as well. ®
