You better explain yourself, mister: DARPA's mission to make an accountable AI

You did WHAT? WHY!

The US government's mighty DARPA last year kicked off a research project designed to make systems controlled by artificial intelligence more accountable to their human users.

The Defense Advanced Research Projects Agency, to give this $2.97bn agency its full name, is the Department of Defense's body responsible for emerging technology for use by the US armed forces. It's not all military applications, however. Significantly, it was DARPA's early funding of the packet-switched Advanced Research Projects Agency Network (ARPANET) more than 40 years ago that helped bring about the internet.

Coming bang up to date, the issue at the heart of the Explainable Artificial Intelligence (XAI) programme is that AI is starting to extend into many areas of everyday life, yet the internal workings of such systems are often opaque and could be concealing flaws in their decision-making processes.

The field of AI has made great strides in the last several years, thanks to developments in machine learning algorithms and deep learning systems based on artificial neural networks (ANNs). Researchers have found that vast sets of example data are the way to train up such systems to produce the desired results, whether that is picking out a face from a photograph or recognising speech input.

But the resultant systems often turn out to operate as inscrutable "black boxes", and even their developers find themselves unable to explain why they arrived at a particular decision. That may soon prove unacceptable in areas where an AI's decisions could have an impact on people's lives, such as employment, mortgage lending, or self-driving vehicles.

Because of this, a number of organisations as well as DARPA have started to take an interest in making AI systems more accountable, or at least able to explain themselves so that their decision-making processes can be tweaked if necessary. Oracle, for example, has been infusing AI into services such as its cloud security and Customer Experience tools, and has disclosed that one of its research teams at Oracle Labs is actively working on making AI more transparent so that users can see why it reaches its decisions.

Microsoft has similarly been adding AI into various products, from cloud services to business intelligence to security, and chief executive Satya Nadella has gone on the record regarding the need for "algorithmic accountability" so that humans can undo any unintended harm.

This "unintended harm" can include biases in the data used to train an AI system. For example, advertising networks have been found to display fewer adverts for high-salary jobs to female candidates because the training data is skewed by a prevalence of male candidates already occupying these roles. "We believe that a responsible AI deployment should incorporate the concept of 'Explainable AI' if a company aims for its AI to be honest, fair, transparent, accountable and human-centric and that ongoing investments from the public and private sectors are essential in order to make Explainable AI a reality now," said Deborah Santiago, Accenture's managing director of Legal Services, Digital & Strategic Offerings, in a recent blog.

Not everyone agrees that there is a pressing need for greater transparency and accountability in AI systems. The value of so-called explainable AI was called into question recently by Google research director Peter Norvig, who noted that humans are not very good at explaining their decision-making either, and claimed that the performance of an AI system could be gauged simply by observing its outputs over time.

Bias

A single example, Norvig reportedly argued, would not shed much light on whether an AI has some form of bias in its decision-making process. Examine all of the decisions it makes over a wide variety of cases, however, and any bias will become apparent.
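
That aggregate check is easy to picture. A minimal sketch, assuming a hypothetical log of lending decisions (the field names and records are invented for illustration):

```python
# Minimal sketch (hypothetical data): audit a model purely from its logged
# outputs, in the spirit of Norvig's argument, by comparing approval rates
# across groups of applicants.
from collections import defaultdict

decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    # ... in practice, every decision the system has made in production
]

approved = defaultdict(int)
total = defaultdict(int)
for record in decision_log:
    total[record["group"]] += 1
    approved[record["group"]] += record["approved"]

for group in sorted(total):
    rate = approved[group] / total[group]
    print(f"group {group}: approved {rate:.0%} of applications")
# A persistent gap between the groups is evidence of bias, but the log alone
# says nothing about where in the model the bias comes from.
```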

The drawback of this position is that analysing a system's outputs may show that an AI has some internal bias, but it will not necessarily help you discover why the bias arises or how to fix it, which is exactly what a system that could explain its reasoning should be able to do.

In addition, an AI system trained on sample data may appear to be coming up with the right answers, but not always for the right reasons. A recent article in New Scientist highlighted the case of a machine learning tool developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients' electronic records. Although it performed well at first, its developer eventually discovered it was picking up on the fact that patients with confirmed cases were sent to a particular clinic, rather than on clues in their health records.
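
An explanation-oriented check can surface that kind of shortcut during development. The sketch below is not the Vanderbilt tool; it builds a synthetic dataset in which a leaked clinic identifier almost perfectly tracks the diagnosis, then uses scikit-learn's permutation importance to show the model leaning on that administrative field rather than on the clinical features:

```python
# Minimal sketch (synthetic data, not the Vanderbilt system): a leaked
# "clinic_id" field almost perfectly tracks the label, so the model learns
# the shortcut instead of the genuine clinical signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
label = rng.integers(0, 2, size=n)                            # 1 = confirmed case
clinic_id = np.where(rng.random(n) < 0.95, label, 1 - label)  # leaked routing signal
symptom_score = label * 0.5 + rng.normal(size=n)              # weak genuine signal
age = rng.normal(60, 10, size=n)                              # uninformative

X = np.column_stack([clinic_id, symptom_score, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for name, score in zip(["clinic_id", "symptom_score", "age"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
# clinic_id dominates -> the "accurate" model is reading the referral pattern,
# not the patient's records.
```

The tell-tale in output like this is a routing or administrative column outranking every clinical variable, which is precisely the sort of red flag an aggregate accuracy figure never reveals.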

Having an explainable AI system should also enable such issues to be spotted and fixed during development, rather than only becoming apparent after the system has been in use for some time. Norvig's position also flies in the face of the goal of having more "open" AI systems that can be demonstrated to be accountable, a vital part of building human users' trust in AI.
