Seeing a robot dog tagging along with NYPD officers after an arrest stuns New Yorkers

Plus: 'First civil lawsuit' against police for incorrect facial recognition match in wrongful collaring, and more

In brief Bystanders in New York City were stunned this week when cops left a public housing complex with a handcuffed man and a robot law enforcement dog trotting after them.

The four-legged machine – shown below – was built by Boston Dynamics, and has been dispatched to crime scenes across the American metropolis since October, according to Gothamist.

Police said they had brought the robo doggo along to the domestic incident, though it was not used. The man was cuffed for possession of weapons, it was reported. It’s not clear how Boston Dynamics’ canine devices can actually help police departments, or what the NYPD has in mind for them, though we can imagine them being sent into a room or building with cameras attached to scope out a situation before officers swoop in.

Still, it has caught the attention of the Big Apple's denizens. The Surveillance Technology Oversight Project (S.T.O.P.), a New York City non-profit focused on privacy, condemned the use of the technology. “The NYPD is turning bad science fiction into real life,” its executive director, Albert Fox Cahn, said in a statement. “There is no reason why our city should be spending $70,000 on yet another invasive surveillance tool.”

Lawsuit filed over wrongful facial-recognition arrest

A dad has sued the Detroit Police Department after he was incorrectly identified as a thief by machine-learning algorithms and arrested.

It’s the first time such a civil case has been filed, according to the American Civil Liberties Union. Robert Williams was nabbed by officers last year, and spent the night behind bars for a crime he didn’t commit. He was accused of stealing watches from a store, and was inaccurately identified as the perp when the police ran the shop's CCTV image of the thief through facial-recognition technology.

“I came home from work and was arrested in my driveway in front of my wife and daughters, who watched in tears, because a computer made an error,” Williams said in a statement. “This never should have happened, and I want to make sure that this painful experience never happens to anyone else.”

The ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative have banded together to sue the DPD on behalf of Williams. The lawsuit claims that Williams’ Fourth Amendment rights were violated and that his wrongful arrest infringes upon the Michigan Elliott-Larsen Civil Rights Act. Lawyers are seeking damages and an end to the misuse of AI technology by the cops.

Ban gender-detection algorithms, say LGBTQ+ activists

Digital rights and LGBT groups have urged the European Commission to ban all facial-recognition models that predict gender.

All Out, Access Now, and Reclaim Your Face have teamed up to warn the public about the potential harms of such technologies just before the EU is due to publish legal guidelines for AI systems next week. Petitions launched by these organizations have received tens of thousands of signatures already, Gay Times first reported.

“With governments, police forces and corporations around the world increasingly using artificial intelligence to predict gender and sexual orientation by analyzing your name, how you look and sound, or how you move, queer people risk getting overly monitored, unfairly targeted, and discriminated against,” Yuri Guaiana, senior campaigns manager at All Out, said.

Facial-recognition models are often trained on binary gender labels – male or female – and fail to represent queer, non-binary, and trans folk, who don’t identify with those two categories. The software often learns biased features, such as associating make-up with being female or short hair with being male, and is less accurate when people don’t fit the norm.
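To see why binary training labels are a structural problem, not just a tuning issue, here's a minimal sketch. The features and weights below are hypothetical, invented for illustration – no real system works this simply – but the point holds for any model with only two output classes: every input is forced into one of them, so anyone outside those categories is mislabeled by construction.

```python
def binary_gender_classifier(features):
    """Toy classifier trained only on 'male'/'female' labels.

    Whatever the input, it can only ever return one of two classes.
    The 'learned' rules below are deliberately the kind of biased
    proxies described above: make-up and hair length.
    """
    score = 0.0
    if features.get("wears_makeup"):
        score -= 1.0   # biased proxy for 'female'
    if features.get("short_hair"):
        score += 1.0   # biased proxy for 'male'
    return "male" if score > 0 else "female"

# A non-binary person is still forced into one of two boxes,
# and someone who doesn't fit the proxies gets a coin-flip answer:
print(binary_gender_classifier({"wears_makeup": True, "short_hair": True}))
```

However sophisticated the real models are, the output space is the same two boxes – which is exactly the campaigners' objection.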

Responsible ML at Twitter

Twitter has launched a new responsible machine-learning initiative to monitor and track the effects of algorithms deployed on its social media platform.

It is specifically looking at potential gender and racial bias in its image-cropping algorithm, and at whether its timeline and content recommendations are fair.
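Twitter's cropper picks the most "salient" region of an image, and that is where representation bias can creep in: if the underlying saliency model systematically scores some faces higher than others, the crop systematically keeps them and discards everyone else. A generic sketch of the idea – hypothetical one-dimensional saliency scores, not Twitter's actual model or code:

```python
def crop_around_max_saliency(saliency, crop_w):
    """Return the start index of a crop_w-wide window centred on the
    highest-scoring position of a saliency map (a generic sketch of
    saliency-based cropping, not any production implementation)."""
    peak = max(range(len(saliency)), key=lambda i: saliency[i])
    start = peak - crop_w // 2
    # Clamp the window so it stays inside the image.
    return max(0, min(start, len(saliency) - crop_w))

# Whichever region the saliency model scores highest wins the crop;
# everything else is cut out of the preview:
scores = [0.1, 0.2, 0.9, 0.3, 0.1, 0.1]   # hypothetical per-column scores
print(crop_around_max_saliency(scores, 3))  # → 1 (window covers columns 1-3)
```

The fairness question is then entirely about the saliency scores: audit what the model finds "interesting" across demographics, and you audit the crop.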

The ML Ethics, Transparency and Accountability team promised to share its research publicly to better inform users, engineers, and academics about the technology. “We will share our learnings and best practices to improve the industry’s collective understanding of this topic, help us improve our approach, and hold us accountable,” it said this week.

The project is still in its early days, though the group hopes to share its findings in research papers and blog posts. ®
