UK cops run machine learning trials on live police operations. Unregulated. What could go wrong? – report

RUSI: How about some codes of practice, transparency, for starters?


The use of machine learning algorithms by UK police forces is unregulated, with little research or evidence that new systems work, a report has said.

The police, not wanting to get left behind in the march of progress or miss out on an opportunity to save some pennies, are keen to test out new technologies.

But the willingness to get started and the disparate nature of policing across the UK often means there is a lack of overall guidance or governance on their use.

In a report (PDF), published today, defence think tank RUSI called for greater regulation, including codes of practice, centred on fairness and proportionality, for trials carried out in live operational environments.

Although algorithms are used by cops in a variety of ways – perhaps the best known being automated facial recognition and crime hotspot mapping – the report focused on the uses that most directly affect individuals. For instance, tools that identify which people are more likely to reoffend, such as Durham Police's Harm Assessment Risk Tool (HART).

The report pointed out that, as with much nascent technology, it is hard to predict the overall impact of the use of ML-driven tools by police, and the systems may have unintended, and unanticipated, consequences.

This is exacerbated by a lack of research, meaning it's hard to definitively say how systems influence police officers' decision-making in practice or how they impact on people’s rights. The RUSI report also pointed to a limited evidence base on the efficacy or efficiency of different systems.

One of the main concerns is algorithmic bias – as the report said, even if a model doesn't include a variable for race, other measures, such as postcodes, can act as proxies for it. Durham Police recently mooted removing a postcode measure from HART.

Others include the fact that models rely on police data – which can be incomplete, unreliable, and continually updated – to make predictions, and that they can fail to distinguish between the likelihood of someone offending and of someone simply being arrested, which is influenced by many other factors.
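
To make those two failure modes concrete, here is a minimal synthetic sketch in Python – the data, rates, and feature names below are invented for illustration, and are not drawn from HART or any force's real data. A model trained on arrest records, with a postcode-style proxy and no race variable at all, still scores two groups with identical offending rates differently:

```python
# Synthetic sketch only: all data, rates, and features below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model) plus a correlated
# "postcode" feature standing in for any geographic proxy.
group = rng.integers(0, 2, n)
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Assume true offending is identical across both groups ...
offends = rng.random(n) < 0.10
# ... but group 1's areas are policed more heavily, so their offences are
# more likely to become recorded arrests -- the label the model trains on.
arrest_rate = np.where(group == 1, 0.9, 0.3)
arrested = offends & (rng.random(n) < arrest_rate)

# Train on arrests, with no race variable anywhere in the inputs.
X = np.column_stack([postcode, rng.random(n)])  # proxy plus a noise feature
model = LogisticRegression().fit(X, arrested)

risk = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.3f}")
# Despite identical offending rates, group 1 scores higher: the postcode
# proxy smuggles the enforcement disparity into the "risk" estimate.
```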

It's going on, in the field, but no one knows about it

But any such concerns haven’t stopped police from trying things out – and the report's authors expressed concern that these trials are going ahead, in the field, without a proper regulatory or governance framework.

The RUSI report also identified a lack of openness when it comes to such trials. The furore over police use of automated facial recognition technology – high rates of inaccuracy were only revealed through a Freedom of Information request – exemplifies this.

It called for the Home Office to establish codes of practice to govern police trials "as a matter of urgency" (although the department's lacklustre approach to biometrics – it took half a decade to draw up a 27-page strategy – doesn't bode well here).

The report also recommended a formal system of scrutiny and oversight, and a focus on ensuring accountability and intelligibility.

In this context, the use of black box algorithms, where neither the police nor the person affected can fully understand – or challenge – how or why a decision has been made, could damage the transparency of the overall justice process.

Different machine learning methods provide different levels of transparency, the report noted, and as such it suggested the regulatory framework should set minimum standards for transparency and intelligibility.
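
As a rough illustration of that gap – using invented feature names and random data, not anything from the report – compare a linear model, whose score decomposes into named, challengeable contributions, with a tree ensemble that offers no per-decision breakdown out of the box:

```python
# Illustrative only: feature names are invented and the data is random.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["prior_arrests", "age", "months_since_last_offence"]
X = rng.random((5_000, 3))
y = rng.random(5_000) < 0.2

glass_box = LogisticRegression().fit(X, y)
black_box = RandomForestClassifier(n_estimators=300).fit(X, y)

case = X[0]
# The linear model's score for one case is a sum of named contributions
# that an officer, or the person being scored, could inspect and contest:
for name, coef, value in zip(features, glass_box.coef_[0], case):
    print(f"{name}: contributes {coef * value:+.3f} to the log-odds")

# The 300-tree forest offers only global feature importances out of the
# box; there is no comparable per-decision breakdown to challenge.
print(dict(zip(features, black_box.feature_importances_.round(3))))
```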

It also emphasised the importance of keeping humans involved: forces need to demonstrate that a person has provided a meaningful review of each decision, to ensure algorithms are used only to support a decision, not make it.
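
One hypothetical sketch of how "support, not make" could be enforced in software – the structure and names here are invented, not taken from the report – is a tool that refuses to record an outcome until a named reviewer logs their own decision and rationale, so overriding the model is a first-class, auditable event:

```python
# Hypothetical sketch: the structure and names here are not from the report.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewedDecision:
    model_score: float   # the algorithm's risk estimate
    model_advice: str    # what the tool recommends
    reviewer: str        # the officer actually deciding
    decision: str        # what was actually decided
    rationale: str       # required, especially when overriding the model
    timestamp: str

def decide(model_score: float, model_advice: str,
           reviewer: str, decision: str, rationale: str) -> ReviewedDecision:
    # The tool will not record an outcome without a human rationale:
    # the model only ever advises.
    if not rationale.strip():
        raise ValueError("a reviewer rationale is required")
    return ReviewedDecision(model_score, model_advice, reviewer, decision,
                            rationale, datetime.now(timezone.utc).isoformat())

# Disagreeing with the model is a first-class, logged outcome:
print(decide(0.72, "refer to intervention programme",
             reviewer="PC 4321", decision="no referral",
             rationale="Score driven by an arrest that was later dropped."))
```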

But the report noted officers might be unwilling to contradict a model that claims a high level of accuracy. (One need only look at the continued presence of lie detectors in the US – despite the weight of evidence against them – to understand people's willingness to accept that a piece of kit is "right".)

As such, the report called for a process to resolve disagreements, and for public sector procurement agreements for ML algorithms to impose requirements on providers – including that the provider be able to retroactively deconstruct the algorithm and to supply an expert witness.

The report also noted the need to properly train police officers – not just so that they can use the kit, but so they fully understand its inherent limitations and can interpret the results in a fair and responsible way.

It recommended that the College of Policing develop a course for officers, along with guidance on the use of ML tools and on how to explain them to the public.

Commenting on the report, Michael Veale, a UCL academic whose focus is on responsible public sector use of ML, emphasised the need to build up evidence about whether such interventions work, and added that the government should back these efforts.

“It may surprise some readers to know that there is still a What Works Centre for Policing that could, if properly funded, play this role,” he said.

“Algorithmic interventions need testing against other investments and courses of action — not just other algorithms, and not just for bias or discrimination — to establish where priorities should lie.”

Veale also warned that there is wider organisational use of predictive technologies in policing, such as for determining staffing levels, timetables, patrols and areas for focus.

“We’ve learned from the experience with New Public Management [a model developed in the '80s to run government bodies] and the NHS over the last few decades of the danger that the gaming associated with target culture can bring — look at Mid Staffs.

"We need to be very careful that if these new technologies are put into day-to-day practices, they don’t create new gaming and target cultures,” he said. ®
