How DARPA wants to rethink the fundamentals of AI to include trust
Would you trust your life to the current generation of AIs? Yeah, we wouldn't either
Comment Would you trust your life to an artificial intelligence?
The current state of AI is impressive, but seeing it as bordering on generally intelligent is an overstatement. If you want to get a handle on how well the AI boom is going, just answer this question: Do you trust AI?
Google's Bard and Microsoft's ChatGPT-powered Bing large language models both made boneheaded mistakes during their launch presentations that could have been avoided with a quick web search. LLMs have also been spotted getting the facts wrong and pushing out incorrect citations.
It's one thing when those AIs are just responsible for, say, entertaining Bing or Bard users, DARPA's Matt Turek, deputy director of the Information Innovation Office, tells us. It's another thing altogether when lives are on the line, which is why Turek's agency has launched an initiative called AI Forward to try answering the question of what exactly it means to build an AI system we can trust.
Trust is …?
In an interview with The Register, Turek said he likes to think of building trustworthy AI with a civil engineering metaphor that also involves placing a lot of trussed trust in technology: Building bridges.
"We don't build bridges by trial and error anymore," Turek says. "We understand the foundational physics, the foundational material science, the system engineering to say, I need to be able to span this distance and need to carry this sort of weight," he adds.
Armed with that knowledge, Turek says, the engineering sector has been able to develop standards that make building bridges straightforward and predictable, but we don't have that with AI right now. In fact, we're in an even worse place than simply not having standards: The AI models we're building sometimes surprise us, and that's bad, Turek says.
"We don't fully understand the models. We don't understand what they do well, we don't understand the corner cases, the failure modes … what that might lead to is things going wrong at a speed and a scale that we haven't seen before."
Reg readers don't need to imagine apocalyptic scenarios in which an artificial general intelligence (AGI) begins killing humans and waging war to get Turek's point across. "We don't need AGI for things to go significantly wrong," Turek says. He cites flash market crashes, such as the 2016 drop in the British pound, attributed to bad algorithmic decision making, as one example.
Then there's software like Tesla's Autopilot, ostensibly an AI designed to drive a car, which has allegedly been connected with 70 percent of accidents involving automated driver assist technology. When such accidents happen, Tesla doesn't blame the AI, Turek tells us; it says drivers are responsible for what Autopilot does.
By that line of reasoning, it's fair to say even Tesla doesn't trust its own AI.
How DARPA wants to move AI ... Forward
"The speed at which large scale software systems can operate can create challenges for human oversight," Turek says, which is why DARPA kicked off its latest AI initiative, AI Forward, earlier this year.
In a presentation in February, Turek's boss, Dr Kathleen Fisher, explained what DARPA wants to accomplish with AI Forward, namely building for AI development the kind of foundational understanding engineers have codified in their own sets of standards.
Fisher explained in her presentation that DARPA sees AI trust as being integrative, and that any AI worth placing one's faith in should be capable of doing three things:
- Operating competently, which we definitely haven't figured out yet,
- Interacting appropriately with humans, including communicating why it does what it does (see the previous point for how well that's going),
- Behaving ethically and morally, which Fisher says would include being able to determine if instructions are ethical or not, and reacting accordingly.
Articulating what defines trustworthy AI is one thing. Getting there is quite a bit more work. To that end, DARPA said it plans to invest its energy, time and money in three areas: Building foundational theories, articulating proper AI engineering practices and developing standards for human-AI teaming and interactions.
AI Forward, which Turek describes as less of a program and more a community outreach initiative, is kicking off with a pair of summer workshops in June and late July to bring people together from the public and private sectors to help flesh out those three AI investment areas.
DARPA, Turek says, has a unique ability "to bring [together] a wide range of researchers across multiple communities, take a holistic look at the problem, identify … compelling ways forward, and then follow that up with investments that DARPA feels could lead toward transformational technologies."
Anyone hoping to toss their hat in the ring for the first two AI Forward workshops is out of luck – sorry, they're already full. Turek didn't reveal any specifics about who was going to be there, only saying that several hundred participants are expected with "a diversity of technical backgrounds [and] perspectives."
What does trustworthy defense AI look like?
If and when DARPA manages to flesh out its model of AI trust, how exactly would it use that technology?
Cybersecurity applications are obvious, Turek says, as a trustworthy AI could be relied upon to make the right decisions at a scale and speed humans couldn't match. On the large language model side, there's building AI that can be trusted to properly handle classified information, or digest and summarize reports in an accurate manner "if we can remove those hallucinations," Turek adds.
And then there's the battlefield. Far from being only a tool used to harm, AI could be turned to lifesaving applications through initiatives like In The Moment, a research project Turek leads to support rapid decision-making in difficult situations.
The goal of In The Moment is to identify "key attributes underlying trusted human decision-making in dynamic settings and computationally representing those attributes," as DARPA describes it on the project's page.
"[In The Moment] is really a fundamental research program about how do you model and quantify trust and how do you build those attributes that lead to trust and into systems," Turek says.
AI armed with those capabilities could be used to make medical triage decisions on the battlefield or in disaster scenarios.
DARPA wants white papers to follow both of its AI Forward meetings this summer, but from there it's a matter of getting past the definition stage and toward actualization, which could definitely take a while.
"There will be investments from DARPA that come out of the meetings," Turek tells us. "The number or the size of those investments is going to depend on what we hear," he adds. ®