Before we lose our minds over sentient AI, what about self-driving cars that can't detect kids crossing the road?

Uncle Sam needs to step in and audit machine-learning systems, House committee told

US House reps on Wednesday grilled a panel of experts on the various impacts artificial intelligence is likely to have on society, privacy, ethics, and so forth, and what can be done about it, if anything.

The range of questions from the House Committee on Science, Space, and Technology revealed the breadth and depth of the matter at hand: the fact that machine-learning breakthroughs affect a huge number of people, and that American lawmakers have varying levels of understanding of AI tech. Some queries, such as what can be done to counter biased training data, were to be expected, though other concerns, such as the rise of sentient machines, were a bit bizarre.

Chairwoman Eddie Johnson (D-TX) led the debate. “Artificial intelligence systems can be a powerful tool for good, but they also carry risk,” she said. “The systems have been shown to exhibit gender discrimination when placing job ads, racial discrimination in predictive policing, and socioeconomic discrimination when selecting zip codes for commercial products and services.”

The issues Johnson described aren’t necessarily technical. Machine-learning models work the way they’re supposed to, learning patterns from the data fed into them. It’s the people behind the machines that are the problem. Meredith Whittaker, cofounder of the AI Now Institute, a research organization studying the social impacts of the emergent technology at New York University, urged Congress to ask: “Who benefits from AI? Who gets harmed? And who gets to decide?”

Voice recognition systems, for example, are better at understanding male voices than female ones. Facial recognition models struggle more with identifying people with darker skin than those with lighter skin. Joy Buolamwini, who is based at MIT Media Lab and founded the Algorithmic Justice League, summed it up as “privileged ignorance.” Computer scientists at the forefront of AI are more likely to be white men, so they are unlikely to run into these flaws in their own artificially intelligent systems, and are likely unaware that their training data produces models that do not work well on the rest of the population.
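To make the mechanism concrete, here is a minimal, hypothetical sketch, not from the hearing and using entirely synthetic data, of how a classifier trained on data dominated by one group can end up with a far higher error rate for an under-represented group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, direction):
    # Binary labels; class-1 points are shifted along `direction`, so the two
    # groups would need *different* decision boundaries to be classified well.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + np.outer(y, direction)
    return X, y

# Training set: 95 per cent group A, 5 per cent group B
Xa, ya = make_group(1900, [2.0, 2.0])
Xb, yb = make_group(100, [-2.0, -2.0])
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced held-out data per group exposes the disparity the model learned
for name, (X, y) in [("group A", make_group(1000, [2.0, 2.0])),
                     ("group B", make_group(1000, [-2.0, -2.0]))]:
    print(f"{name} error rate: {1 - model.score(X, y):.2f}")
```

Nothing in that code is “broken”: the model faithfully reflects whatever its training data over- or under-represents, which is the point the witnesses kept returning to.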

At the heart of this are technology giants and research powerhouses that are notoriously non-diverse, Whittaker said. The workforces of Google, Facebook, Microsoft, Amazon, and Apple lack women, people of color, and people with disabilities, relatively speaking.

“The diversity crisis in the AI industry means that women, people of color, gender minorities, and other marginalized populations are excluded from contributing to the design of AI systems, from shaping how these systems function, and from determining what problems these systems are tasked with solving,” she said.

Jack Clark, policy director at OpenAI, a machine-learning research lab based in San Francisco, said technology companies needed to have people with more diverse areas of expertise, too. AI research is normally conducted by people from narrow backgrounds: computer science, mathematics, software engineering, and similar. “Having 20 computer scientists and a lawyer” isn’t good enough, he said. “We need philosophers, social scientists, and security researchers, too.”

Even if the computer science community suddenly became more diverse overnight, however, there would still be glaring problems. “We know AI algorithms are not immune to low quality, biased data,” said Georgia Tourassi, director of the Health Data Sciences Institute at Oak Ridge National Laboratory, the boffinry nerve center sponsored by the US Department of Energy.

The same questions of discrimination, privacy, and security also need to be applied to medicine. There needs to be ethical oversight not only of the data used to train medical systems but also of those systems once they are deployed, she argued.

Fairness

So, what should the US government do about all this? And by this, we mean: ensuring machine-learning algorithms and models are fair and trustworthy. Tourassi suggested “objective benchmarking” to assess trained systems. Government agencies could perform tests on an AI system’s performance, robustness, fairness, and safety, and publish or share the results. Organisations such as the National Institute of Standards and Technology could be in charge of audits, Clark added.
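At its simplest, that kind of audit is disaggregated reporting: score a system separately for each demographic group and publish the gaps. Here is a minimal, hypothetical sketch of the idea, with made-up labels and names, and not any actual NIST procedure:

```python
import numpy as np

def disaggregated_report(y_true, y_pred, groups):
    """Per-group accuracy and false-negative rate for a set of predictions."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        positives = t == 1
        report[g] = {
            "n": int(mask.sum()),
            "accuracy": float(np.mean(t == p)),
            # Share of true positives the model missed for this group
            "false_negative_rate": (float(np.mean(p[positives] == 0))
                                    if positives.any() else float("nan")),
        }
    return report

# Toy usage: a pedestrian detector scored separately for adults and children
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["adult", "adult", "child", "adult",
                   "adult", "child", "child", "child"])
print(disaggregated_report(y_true, y_pred, groups))
```

Publishing this sort of per-group breakdown, rather than a single headline accuracy figure, is roughly the kind of benchmarking the witnesses described.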

These impact assessments should be carried out within the public and private sectors before such systems are deployed, Whittaker urged. And companies need to “stop hiding behind the shield of corporate secrecy,” allowing these tests to be performed and publicized.

“When regulators, researchers, and the public seek to learn more, and to research and understand the potential harms of these systems, they are faced with structural barriers. The companies developing and deploying AI often exploit corporate secrecy laws, making testing, auditing, and monitoring extremely difficult, if not impossible,” she added.

Uncle Sam also needs to increase funding for interdisciplinary research to address deep-rooted biases in systems, rather than assuming any problems can be solved with a relatively cheap software patch, Buolamwini said. “For example, studies of automated risk assessment tools used in the criminal justice system show continued racial bias in the penal system, which cannot be remedied by technical fixes,” she added.

A small handful of Congress members tried to steer the conversation to a future where AI is less predictable and potentially more dangerous. “Let me ask the Skynet question,” one said. “Shouldn’t we worry about the emergence of consciousness in AI?”

Buolamwini responded: “The worry about conscious AI, I think, misses the real world issues of dumb AI, AI that is not well trained.” She cited a recent study showing a computer vision model trained to recognize pedestrians was more likely to miss children than adults.

If such a model were deployed in self-driving cars, it could potentially endanger the lives of children. “So here, we were worried about AI becoming sentient, and the ones that are leading to the fatalities are the ones that aren’t even well trained.” ®

You can watch the hearing unfold below...

[YouTube video of the hearing]
