Regulate This! Time to subject algorithms to our laws

A Minority Report future awaits

Opinion Algorithms are almost as pervasive in our lives as cars and the internet. And just as those technologies are considered vital to our economy and society, and are regulated accordingly, we must ask whether it's time to regulate algorithms too.

Let's accept that the rule of law is meant to provide solid ground upon which our society can function. Some laws stop us taking each other's stuff (property, liberty, lives) while others help us swap our stuff in a way that's fair to the parties involved (property, liberty, time).

The idea of regulating algorithms has gained traction even in Parliament – and not without cause. The idea behind such regulation is often very much in line with other laws: that without oversight and legal culpability they could be deleterious to the whole business we suffer through of living alongside each other, or swapping stuff.

Baron Timothy Clement-Jones in February said artificial intelligence algorithms required "huge consideration" of their "ethics".

Clement-Jones' fellow in the House of Lords, Baroness Byford, told the chamber: "According to a recent radio programme, algorithms are used to make individual decisions in the fields of employment, housing, health, justice, credit and insurance.

"I had heard that employers are increasingly studying social media to find out more about job applicants. I had not realised that an algorithm, programmed by an engineer, can, for example, take the decision to bin an application."

Such concerns are not unique to Parliament. Speaking to The Register in March, UCL's Dr Hannah Fry warned we needed to be wary of algorithms behind closed doors.

The issue, she noted, is that without access to seeing how such algorithms function "you can't argue against them" when they provide dodgy results.

"If their assumptions and biases aren't made open to scrutiny then you're putting a system in the hands of a few programmers who have no accountability for the decisions that they're making," Fry said.

She explained how algorithms that predict re-offending rates are being used in sentencing in the US, where the analysis of such data has very serious consequences.

"An example I use in my talk is of a young man who was convicted of the statutory rape of a young girl – it was a consensual act, but still a statutory crime – and his data was put into this recidivism algorithm and that was used in his sentencing. Because he was so young and it was a sex crime, it judged him to have a higher rate of offending and so he got a custodial sentence," she said.

"But if he had been 36 instead of 19, he would have received a more lenient sentence, though by any reasonable metric, one might expect a 36-year-old to receive a more punitive sentence."

Legislative action has been suggested. Last year, Labour's industrial spokesperson and shadow minister, Chi Onwurah, told The Guardian in an interview that "algorithms aren't above the law" and that as "the outcomes of algorithms are regulated – the companies which use them have to meet employment law and competition law. The question is, how do we make that regulation effective when we can't see the algorithm?"

Meanwhile, the European Union's commissioner for competition, Margrethe Vestager, urged competition enforcers to keep an eye out for cartels that use software "to work more effectively" as cartels.

In a speech about algorithms and competition, she stated: "We're not yet dealing with an algorithm quite as smart as [Hitchhiker's Guide to the Galaxy's] Deep Thought. But we do have computers that are more powerful than many of us could have imagined a few years ago. And clever algorithms put that power – quite literally – in our hands."

So what is regulation, and how do we do it?

The immediate answer to many of these concerns is to reveal biases in algorithms by opening them up to public scrutiny. This has been the most fundamental of all human political activities since the Enlightenment — to observe and to measure the expression of power in society.

And yet, if the last decades of open-source software have taught us anything, it is that mere availability does not incentivise investigation. Very old vulnerabilities are constantly being found in software that has certainly been in use long enough for them to have been discovered earlier.

The House of Lords debate earlier this year centred on a proposed amendment to the Digital Economy Bill, which would have given Ofcom the power to "carry out and publish evaluations of algorithms". But unlike data protection, where strict definitions allow the Information Commissioner's Office to enforce the Data Protection Act, algorithms rarely come with specifically defined aims and intentions against which their performance could be measured.

The increasing popularity of machine learning algorithms will make this problem more apparent. When an organisation doesn't know exactly what it wants from an algorithm, how can it measure the results? And how would unintended results be noticed and reported to a regulator?

One such method could be to require organisations using algorithms to retain records of all the data they use, and to reappraise previous decisions whenever the algorithm is updated. This would be expensive from the outset, and the results of reappraisal could be extreme.
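A minimal sketch of what such record-keeping might look like, assuming a loan-style decision: every decision is logged with its inputs and model version, so that a later, updated model can be replayed over the log to find decisions that would now come out differently. The decision rule, field names, and thresholds here are all invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One audit entry: everything needed to re-run the decision later."""
    model_version: str
    inputs: dict
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[DecisionRecord] = []

def decide(model_version: str, inputs: dict) -> str:
    # Stand-in for the real model: approve if reported income is at
    # least three times the repayment (assumed rule, purely illustrative).
    outcome = "approve" if inputs["income"] >= inputs["repayment"] * 3 else "deny"
    audit_log.append(DecisionRecord(model_version, dict(inputs), outcome))
    return outcome

def reappraise(new_decide) -> list[DecisionRecord]:
    """Re-run every logged decision under an updated model and return
    the records whose outcome would now differ."""
    return [r for r in audit_log if new_decide(r.inputs) != r.outcome]
```

The expensive part is exactly what the sketch glosses over: retaining the full inputs for every decision ever made, and deciding what to do with the records that `reappraise` flags.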

Who would be liable for a man wrongfully given a harsh prison sentence, or a family denied a mortgage they could in fact have afforded?

Your suggestions are welcome. ®
