Teach undergrads ethics to ensure future AI is safe – compsci boffins

Read sci-fi, kids! Save the world from killer robots!

Universities should step up efforts to educate students about AI ethics, according to a panel of experts speaking at the AAAI conference in San Francisco on Monday.

Machine learning is constantly advancing as new algorithms are developed, and as hardware to accelerate computations improves. As the capabilities of AI systems increase, so do fears that this progressing technology will be abused to trample on people's privacy and other rights.

Sure, there are magazines and blogs full of academics wringing their hands about seemingly impossible conscious computers wrestling with moral dilemmas. Before we get to that point in AI development, though, there are still modern-day practical problems to consider. Say a program decides which medication you should take: shouldn't you be able to pick apart how it came to that conclusion? What if the prescription is based on a paid-for bias in the model in favor of a particular pharmaceutical giant?

When a machine harms a person, who is at fault? And how do you, as an engineer, design your system so that a machine doesn't hurt people or cause damage in the first place?

Several groups, such as the Partnership on AI and The Ethics and Governance of Artificial Intelligence Fund, have sprung up to try to keep tech in check. More directly, though, undergrads should be made aware of the moral and ethical issues surrounding technology, and good practices should be drilled into the next generation of engineers, the conference was told.

Robots are particularly worrying. It’s already difficult to explain decisions made by algorithms, but when those algorithms are put in control of physical machines capable of directly affecting their surroundings, it’s no wonder that alarm bells are ringing.

More and more robots and AI systems are functioning as members of society, said Ben Kuipers, a professor of computer science and engineering at the University of Michigan.

“We worry about robot behavior,” he told the audience. “With no sense of what’s appropriate, and what’s not, they may do great harm.” Prof Kuipers cited the example of the robot in the sci-fi comedy flick Robot & Frank, which willingly lies and breaks the law in pursuit of its goals.

Even if a robot’s missions are “human-given top-level goals,” it will create its own subgoals and may execute them in unexpected ways to fulfill its main task. Designing robots to be trustworthy therefore requires more than a solid grounding in engineering – philosophy is needed, too.

Prof Kuipers pointed to the theories of utilitarianism, deontology, and virtue ethics as sources of useful clues for building ethical machines.

Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University, agreed. In his online robotics and ethics teaching guide, he wrote: “First, students need access to formal ethical frameworks that they can use to study and evaluate ethical consequence in robotics well enough to make their own well-informed decisions. Second, students need to understand the downstream impact of media-making well enough to help the field as a whole communicate with the public authentically and effectively about robotics and its ramifications on society.”

But rigid ethical frameworks aren’t always the best way to model moral problems in AI, Judy Goldsmith, a professor of computer science at the University of Kentucky, told the audience.

“Case studies are rarely memorable, emotionally gripping or subtle. There is no character development and often there’s a right answer,” she said. Prof Goldsmith prefers science fiction as it provides a “rich vein for ethical dilemmas” and an “emotional connection to stories makes discussions memorable when real-world dilemmas arise.” ®
