
While Google agonizes over military AI, IBM is happy to pick up the slack, even for the Chinese military

Big Blue insists it's only for good ol' fashioned research

IBM shared its controversial Diversity in Faces dataset, used to train facial recognition systems, with companies and universities directly linked to militaries and law enforcement across the world.

The dataset contains information scraped from a million images posted on Flickr, a popular image sharing website, under the Creative Commons license. It was released by IBM in the hopes that it could help make facial recognition systems less biased.

In an email obtained by The Register, IBM sent a link to the dataset this week to a list of people who had been granted access to the data after applying for permission via an online questionnaire. “In case you had trouble with the download link for the DiF dataset, please download the dataset from this new link (zip file, ~376MB),” IBM’s Research Diversity in Faces (DiF) team wrote.

The team sent the email to hundreds of addresses, revealing the names of the people who had been given access to the dataset. The majority were students and academics at universities in the US, China, Italy, Japan, the United Kingdom, Canada, Australia, New Zealand, and elsewhere.

Several of them are from private organisations. Some work in seemingly innocuous industries, such as contact lenses or online gaming; others, however, are more suspicious, focused on building biometric and surveillance systems. A few are directly affiliated with the military and law enforcement.

Two employees of MITRE, an R&D company funded by the US government with links to the Department of Defense and Department of Homeland Security (DHS), were sent the link to the DiF dataset. MITRE is currently working on a range of projects, including biometric facial identification from photos and videos.

Another organisation on the list was the Maryland Testing Facility, which supports the DHS by testing a variety of biometric systems; equipment such as iris scanners is tested on participants in environments that mimic airport security. One company, FaceFirst, boasted it was the “market leader in robust facial recognition software for law enforcement, including police, highway patrol, sheriff departments and other public safety agencies.”

It’s no secret that the DHS is interested in using AI to identify people of interest in video feeds by matching them against databases. Documents published by the Project on Government Oversight (POGO), a non-profit organization that investigates the US federal government, revealed that a representative from US Immigration and Customs Enforcement was interested in Amazon’s commercial facial recognition system, Rekognition.

Other institutions stick out too. There are researchers from Google, Microsoft, Amazon, and Huawei on the list. Someone from the Military Academy, an establishment akin to a university that trains officers for the Portuguese Army, also had access to the dataset. So did a PhD student from China’s National University of Defense Technology, which operates under the Chinese government’s Central Military Commission. We have asked them for comment.

China has prioritised AI to boost its economy and national security. It has been particularly swift to adopt facial recognition in airports, restaurants, and on its public streets, and has used the same technology to crack down on its Muslim population.

“Given how the Chinese government has been using AI facial recognition systems in Xinjiang, in the perpetration of large-scale human rights abuses against Uighur Muslims, it’s definitely possible that the training data released by IBM could be used for insidious AI applications,” Justin Sherman, a cybersecurity policy fellow at New America, a Washington, DC-based think tank, told The Register.


“But that doesn’t necessarily mean IBM shouldn’t release this kind of data, or that other companies or groups should put a stop to the open nature of AI research. Most AI applications are dual-use—they have both civilian and military applications—and so it may be impossible to distinguish between what is and is not a ‘dangerous’ AI application or component.”

Under the Terms of Use, IBM states that the dataset can be used for “non-commercial, research purposes only”. The application form also requests that the dataset be deleted after use. How IBM plans to enforce these rules is not clear, however.

An IBM spokesperson told us: “Our goal with Diversity in Faces has been to advance scientific progress on a technology about which society is asking important questions. We provided the information only to researchers who indicated that their use of it would be consistent with our objective: making the technology more accurate and fair.”

But tackling biases doesn’t solve the problem of malicious uses. “What people often neglect about the discussion of bias in AI is that even if it were possible to totally eliminate, which it’s not, this would not make facial recognition any less problematic in its implications for impact on people’s lives,” said Liz O’Sullivan, former head of annotations at AI startup Clarifai. ®
