You better explain yourself, mister: DARPA's mission to make an accountable AI

You did WHAT? WHY!


AI control

One of the goals of the DARPA research programme is to come up with machine learning techniques that can be used to build AI control systems for autonomous vehicles operated by the armed forces in future. In that case, the military needs to be sure that the system is focusing on the right details, and must be able to swiftly investigate and correct the system if it goes wrong.

"There, we envision a soldier sending one of these things off on a mission, and it'll come back, and then it'll have to explain why it did or didn't make the decisions that it made on the mission," said David Gunning, the programme manager overseeing the XAI project. Although DARPA is funded by the US Department of Defense, theprogramme involves researchers drawn from various academic institutions who will be free to publish the results of their work so that anyone will have access to the techniques they have developed.

Researchers have been following three broad strategies. The first is deep explanation: developing modified deep learning techniques that are capable of producing an explainable model. This could be achieved, for example, by forcing the neural network to associate nodes at one level within the network with semantic attributes that humans can understand.
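As a concrete illustration, the sketch below (in PyTorch, with hypothetical attribute names, dimensions and loss weighting) trains a network whose middle layer is tied to human-readable attributes, so the values of those units can be reported as part of an explanation. It shows the general idea rather than any specific XAI team's design.

```python
# Minimal sketch of a "deep explanation" style network: the bottleneck layer
# is supervised to predict human-readable attributes (the names below are
# hypothetical), so each unit has a meaning that can be surfaced to a user.
import torch
import torch.nn as nn

ATTRIBUTES = ["has_whiskers", "has_fur", "pointed_ears"]  # illustrative only

class ExplainableNet(nn.Module):
    def __init__(self, n_features=128, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.attribute_layer = nn.Linear(64, len(ATTRIBUTES))  # one unit per attribute
        self.classifier = nn.Linear(len(ATTRIBUTES), n_classes)

    def forward(self, x):
        h = self.encoder(x)
        attrs = torch.sigmoid(self.attribute_layer(h))  # interpretable bottleneck
        return self.classifier(attrs), attrs

model = ExplainableNet()
x = torch.randn(8, 128)                                # fake input batch
y = torch.randint(0, 10, (8,))                         # fake class labels
attr_labels = torch.randint(0, 2, (8, len(ATTRIBUTES))).float()  # fake attribute labels

logits, attrs = model(x)
# Joint loss: the usual task loss plus a term forcing the bottleneck units
# to line up with the labelled semantic attributes.
loss = nn.functional.cross_entropy(logits, y) + \
       nn.functional.binary_cross_entropy(attrs, attr_labels)
loss.backward()
```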

"We've got a lot of interesting proposals there, and deep learning is a hugely active research area, both in industry and universities, so there are a lot of interesting ideas on how you might produce a more explainable model from deep learning," Gunning said.

Decisions, decisions

The second strategy is to use a different machine learning technique, such as a decision tree, which produces an interpretable model.

"People are working on advanced versions of decision trees, like a Bayesian rule list is one of the techniques, so they can perform a fairly complex machine learning process, but what they produce is just a linear decision tree, so if you're trying to predict whether someone will have a heart attack, you might say that if he's over 50, his probability is 50 per cent, if he also has high blood pressure, it's now 60 per cent. And so you end up with a nice, organised human-readable version of the features that the system thinks are most important," explained Gunning.

The third approach is known as model induction, and is essentially a version of black box testing. An external system runs millions of simulations, feeding the machine learning model a variety of different inputs and trying to infer a model that explains the system's decision logic. Of the three approaches, only the last can be applied retrospectively. In other words, if a developer has already started building and training a machine learning system, it is probably too late to adapt it to a deep explanation model; developers would need to set out with the goal of producing an explainable AI from the beginning.
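A rough sketch of the model-induction idea, assuming scikit-learn and synthetic data: an already-trained model is treated as a black box, probed with a large batch of generated inputs, and a shallow decision tree is fitted to its answers so that an approximation of its decision logic can be read back out.

```python
# Model induction as a global surrogate: probe a black-box model and fit an
# interpretable stand-in to its outputs. Data and model choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

black_box = RandomForestClassifier().fit(X_train, y_train)  # stands in for any opaque model

# Probe the black box with a large batch of synthetic inputs.
X_probe = rng.normal(size=(20000, 4))
y_probe = black_box.predict(X_probe)

# Fit an interpretable surrogate to the black box's behaviour and print its rules.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_probe, y_probe)
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```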

A further consideration for explainable AI is how the system can convey its decision-making process to the human operator through some form of user interface. Here, the choice is complicated by the application as well as the type of machine learning technique that has been chosen.

For example, the system might be designed with a natural language generator, perhaps based around something like a recurrent neural network, to generate a narrative that describes the steps that led to its output, while a system developed for image recognition tasks may be trained to highlight areas of the image to indicate the details it was focusing on.
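For the image case, one widely used technique (not necessarily the one the XAI teams have settled on) is a gradient-based saliency map: take the gradient of the winning class score with respect to the input pixels, and its magnitude indicates which regions the model was focusing on. A minimal PyTorch sketch with a placeholder model and a random image:

```python
# Gradient-based saliency: which pixels most influenced the predicted class?
# The tiny convolutional model and random image are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5))

image = torch.randn(1, 3, 64, 64, requires_grad=True)
logits = model(image)
logits[0, logits.argmax()].backward()           # gradient of the top class score w.r.t. pixels
saliency = image.grad.abs().max(dim=1).values   # per-pixel importance map, shape 1 x 64 x 64
print(saliency.shape)
# In practice this map would be overlaid on the image to highlight the
# regions the classifier was attending to.
```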

However, one current issue with explainable AI is that there is a trade-off between explainability and performance: the highest-performing machine learning models, such as deep learning with many layers of neural networks, are typically the least explainable. This means that developers will have to decide how important explainability is versus performance for their particular application.

"If you're just searching for cat videos on Facebook, explainability may not be that important. But if you're going to give recommendations to a doctor or a soldier in a much more critical situation, then explainability will be more important, and we’re hoping that we will soon have better techniques for developers to use in that case, so they can produce that explanation," said Gunning.

In other words, explainable AI is possible, but whether you need it or not depends on the application. And if that application has an impact on people's lives, it may only be a matter of time before the law demands that it be accountable. ®
