While Google agonizes over military AI, IBM is happy to pick up the slack, even for the Chinese military

Big Blue insists it's only for good ol' fashioned research


IBM shared its controversial Diversity in Faces dataset, used to train facial recognition systems, with companies and universities directly linked to militaries and law enforcement across the world.

The dataset contains information scraped from a million images posted on Flickr, a popular image sharing website, under the Creative Commons license. It was released by IBM in the hopes that it could help make facial recognition systems less biased.

In an email obtained by The Register, IBM sent a link to the dataset this week to a list of people that had been granted access to use the data after applying for permission via an online questionnaire. “In case you had trouble with the download link for the DiF dataset, please download the dataset from this new link (zip file, ~376MB),” IBM’s Research Diversity in Faces (DiF) team wrote.

The team sent the email to hundreds of addresses, revealing the names of everyone who had been given access to the dataset. The majority were students and academics from universities in the US, China, Italy, Japan, the United Kingdom, Canada, Australia, New Zealand, and elsewhere.

Several of those granted access are from private organisations. Some are in seemingly innocuous industries, such as a contact lens maker or an online gaming company; others, however, are focused on building biometric and surveillance systems. A few are directly affiliated with militaries and law enforcement.

Two employees from MITRE, an R&D company funded by the US government with links to the Department of Defense and the Department of Homeland Security (DHS), were sent the link to the DiF dataset. MITRE is currently working on a range of projects, including biometric facial identification from photos and videos.

Another organisation on the list was the Maryland Testing Facility, which supports the DHS by testing a variety of biometric systems; equipment such as iris scanners is tested on participants in environments that mimic airport security. One company, FaceFirst, boasted it was the “market leader in robust facial recognition software for law enforcement, including police, highway patrol, sheriff departments and other public safety agencies.”

It’s no secret that the DHS is interested in using AI to match people of interest in video feeds against databases. Documents published by the Project on Government Oversight (POGO), a non-profit organization investigating the US federal government, revealed that a representative from US Immigration and Customs Enforcement was interested in Amazon’s commercial facial recognition system, Rekognition.

Other institutions stick out too. There are researchers from Google, Microsoft, Amazon, and Huawei on the list. Someone from the Military Academy, an establishment akin to a university that trains officers for the Portuguese Army, also had access to the dataset. So did a PhD student from China’s National University of Defense Technology, which operates under the Chinese government’s Central Military Commission. We have asked them for comment.

China has prioritised AI to boost its economy and national security. It has been particularly swift at adopting facial recognition in airports, restaurants, and on its public streets, and has used the same technology to crack down on its Muslim population.

“Given how the Chinese government has been using AI facial recognition systems in Xinjiang, in the perpetration of large-scale human rights abuses against Uighur Muslims, it’s definitely possible that the training data released by IBM could be used for insidious AI applications,” Justin Sherman, a cybersecurity policy fellow at New America, a Washington, DC-based think tank, told The Register.

“But that doesn’t necessarily mean IBM shouldn’t release this kind of data, or that other companies or groups should put a stop to the open nature of AI research. Most AI applications are dual-use—they have both civilian and military applications—and so it may be impossible to distinguish between what is and is not a ‘dangerous’ AI application or component.”

Under the Terms of Use, IBM states that the dataset can be used for “non-commercial, research purposes only”. The application form also requests that the dataset be deleted after use. How IBM plans to enforce these rules is unclear, however.

An IBM spokesperson told us: “Our goal with Diversity in Faces has been to advance scientific progress on a technology about which society is asking important questions. We provided the information only to researchers who indicated that their use of it would be consistent with our objective: making the technology more accurate and fair.”

But tackling biases doesn’t solve the problem of malicious uses. “What people often neglect about the discussion of bias in AI is that even if it were possible to totally eliminate, which it’s not, this would not make facial recognition any less problematic in its implications for impact on people’s lives,” said Liz O’Sullivan, former head of annotations at AI startup Clarifai. ®
