It's bizarre that we're at a point where reports must spell out that human rights trump AI

But that's what a UN group has done

The protection of human rights should be front and centre of any decision to deploy AI-based systems, whether they're used as corporate tools, such as in recruitment, or in areas such as law enforcement.

And unless sufficient safeguards are in place to protect human rights, there should be a moratorium on the sale and use of AI systems, and those that cannot comply with international human rights law should be banned.

Those are just some of the conclusions of a report prepared for the Geneva-based Human Rights Council (HRC) by the office of the United Nations High Commissioner for Human Rights, Michelle Bachelet.

"The right to privacy in the digital age" [download] takes a close look at how AI – including profiling, automated decision-making, and other machine-learning technologies – affects people's rights.

While the report acknowledges that AI "can be a force for good," it also highlights serious concerns around how data is stored, what it's used for, and how it might be misused.

"AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights," Bachelet said in a statement.

"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face."

The report is critical of the way governments and businesses have "often rushed to incorporate AI applications, failing to carry out due diligence," citing "numerous cases of people being treated unjustly because of AI," including people arrested on the strength of "flawed facial recognition."

In July, the US House Committee on the Judiciary heard how facial recognition technology (FRT) is being used by law enforcement agencies in America. The hearing took testimony from all sides of the debate as legislators sought to balance the benefits of FRT against issues such as the right to personal privacy and wrongful identification.

But it was the personal testimony of Robert Williams – who was wrongly identified, arrested, and detained all because of a "blurry, shadowy image" – that brought the debate into sharp focus.

Indeed, the issue of how AI is used in law enforcement and the criminal justice system has also been keeping the House of Lords Justice and Home Affairs Committee in the UK busy over the summer.

Most recently, Professor Elizabeth E Joh, of the UC Davis School of Law, told the committee that there are concerns over some predictive policing tools.

In some cases, Joh explained, there have been calls for technologies such as facial recognition to be banned, but attempts to do so have been "piecemeal" rather than national in scale. And with respect to some predictive policing tools, she suggested they "may not be as reliable or as effective as promised."

It's a point picked up by the HRC report, which flagged that some predictive tools "carry an inherent risk of perpetuating or even enhancing discrimination, reflecting embedded historic racial and ethnic bias in the data sets used, such as a disproportionate focus of policing of certain minorities."

The report calls for human rights to be centre stage in the "development, use and governance of AI as a central objective."

It also calls for a ban on "AI applications that cannot be operated in compliance with international human rights law and impose moratoriums on the sale and use of AI systems that carry a high risk for the enjoyment of human rights, unless and until adequate safeguards to protect human rights are in place."

Bachelet said: "We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us."

Asked to comment on the HRC report, the UK Home Office declined to be drawn on the specifics, insisting instead that, when it comes to issues such as facial recognition, policy remains open to change and that it is keen to ensure a "consistent approach is taken nationwide."

It pointed to last year's ruling by the Court of Appeal, which found that South Wales Police broke the law with an indiscriminate deployment of its automated facial-recognition technology in Cardiff city centre between December 2017 and March 2018.

As a result, the Home Office is updating its Surveillance Camera Code of Practice to reflect the judgment; the revised code will then face Parliamentary scrutiny.

A Home Office spokesperson told us: "This government is delivering on a manifesto commitment to empower the police to use new technologies, like facial recognition to help identify and find suspects, to protect the public.

"There is a robust legal framework for the use of such technology, in keeping with last year's Court of Appeal ruling. The independent College of Policing has been consulting extensively on national guidance to ensure a consistent approach is taken nationwide."

No one from the House of Lords Justice and Home Affairs Committee was available to comment at the time of writing. ®
