Privacy watchdogs from the UK, Australia team up, snap on gloves to probe AI-for-cops upstart Clearview
Investigation follows Canada's decision to give image-scraping biz the boot
Following Canada's lead earlier this week, privacy watchdogs in Britain and Australia today launched a joint investigation into how Clearview AI harvests and uses billions of images it scraped from the internet to train its facial-recognition algorithms.
The startup boasted it had collected a database packed with more than three billion photos downloaded from people’s public social media pages. That data helped train its facial-recognition software, which was then sold to law enforcement as a tool to identify potential suspects.
Cops can feed a snapshot of someone taken from, say, CCTV footage into Clearview’s software, which then attempts to identify the person by matching it up with images in its database. If there’s a positive match, the software links to that person’s relevant profiles on social media that may reveal personal details such as their name or where they live. It's a way to translate previously unseen photos of someone's face into an online handle so that person can be tracked down.
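In outline, that pipeline is a nearest-neighbour search over face embeddings. The following is a minimal illustrative sketch, not Clearview's actual code: the embeddings, profile URLs, function names, and similarity threshold are all invented for illustration, and a real system would compute embeddings with a trained neural network rather than use toy vectors.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(query_embedding, database, threshold=0.9):
    """Return the profile URL of the closest match above threshold, else None."""
    best_url, best_score = None, threshold
    for embedding, profile_url in database:
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_url, best_score = profile_url, score
    return best_url

# Toy database pairing scraped-photo embeddings with social media profiles.
db = [
    ([0.9, 0.1, 0.3], "https://example.com/profiles/alice"),
    ([0.2, 0.8, 0.5], "https://example.com/profiles/bob"),
]

# A CCTV still whose embedding closely resembles the first database entry.
print(identify([0.88, 0.12, 0.31], db))
```

If no database entry clears the similarity threshold, the sketch returns `None` rather than a forced match, which is why such systems report "no hit" as well as candidate identities.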
Now, the UK’s Information Commissioner (ICO) and the Office of the Australian Information Commissioner (OAIC) are collaborating to examine the New York-based upstart's practices. The investigation will focus “on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO said in a statement.
“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” it added. “No further comment will be made while the investigation is ongoing.”
In response, Clearview AI told us it "searches publicly available photos from the internet in accordance with applicable laws. It is used to help identify criminal suspects. Its powerful technology is currently unavailable in UK and Australia. Individuals in these countries can opt-out. We will continue to cooperate with UK’s ICO and Australia’s OAIC."
The move comes days after the Office of the Privacy Commissioner of Canada announced that Clearview will stop operating in Canada. The agency has been probing the startup since February to see whether its methods complied with the country’s privacy laws.
"In response to the commissioner's request, Clearview AI has ceased its operations in Canada," the AI biz told The Register today.
"We are proud of our record in assisting Canadian law enforcement to solve some of the most heinous crimes, including crimes against children. We will continue to cooperate with OPC on other related issues. In addition, Canadians will be able to opt-out of Clearview's search results."
Clearview’s last Canadian customer was the Royal Canadian Mounted Police (RCMP), which has indefinitely suspended its contract with the biz. The Privacy Commissioner of Canada also has a separate ongoing investigation into the RCMP’s use of Clearview’s facial-recognition technology.
In May, Clearview was sued in the US by the American Civil Liberties Union. At the time, the startup argued that since the images were all publicly available, its use of them should be, somehow, protected under the First Amendment. Clearview’s lawyer Tor Ekeland told us: “Clearview AI is a search engine that uses only publicly available images accessible on the internet. It is absurd that the ACLU wants to censor which search engines people can use to access public information on the internet. The First Amendment forbids this.” ®