Ghost in Musk's machines: Software bugs' autonomous joy ride

Getting serious before things get serious for self-driving vehicles

'Another potential source of bugs is neural networks'

Hand-written code is one thing, but another potential source of bugs is neural networks. Nvidia has been experimenting with cars that use such networks to learn how to drive without, supposedly, receiving any hand-coded instructions.

Neural networks train themselves, and this might appear to remove the possibility of human error. But it’s not so simple, explains Lorenzo Strigini, director of the Centre for Software Reliability at City University of London. One reason is that neural networks are implemented and trained by software – software that will almost certainly contain bugs, bugs that will almost certainly affect what your neural network learns and how it behaves. If you have a bug in the code that executes machine learning, Koopman reckons, “you’re finished.”
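To make that concrete, here is a minimal, hypothetical sketch in Python – the file format, labels and function names are all invented – of how a defect in the training pipeline, rather than in the network itself, can silently change what the network learns.

```python
# A minimal, hypothetical sketch of Strigini's point: the defect lives in the
# training pipeline, not the network, yet it changes what the network learns.
# File format, labels and function names here are all invented.

def load_labels(lines):
    """Parse 'frame_id,label' rows from a hypothetical annotation file."""
    labels = {}
    for line in lines:
        frame_id, label = line.strip().split(",")
        labels[frame_id] = label
    return labels

def build_training_pairs(frame_ids, labels):
    pairs = []
    for i, frame_id in enumerate(frame_ids):
        # BUG: looking up the label by position rather than by frame_id.
        # The pipeline runs without complaint; the learner is simply fed
        # frames paired with the wrong labels.
        label = labels.get(frame_ids[i - 1])   # should be labels.get(frame_id)
        pairs.append((frame_id, label))
    return pairs

annotations = ["f001,cyclist", "f002,clear_road", "f003,cyclist"]
frames = ["f001", "f002", "f003"]
print(build_training_pairs(frames, load_labels(annotations)))
# [('f001', 'cyclist'), ('f002', 'cyclist'), ('f003', 'clear_road')]
```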

In theory, the more miles autonomous cars clock up, the more data they will have to learn from, and the safer they will be. However, there is a “long tail” of “stuff that almost never happens” that can nonetheless be “pretty thick”, simply because there are so many possible, very rare events, Koopman warns. Which training data engineers decide to feed neural networks is therefore “super safety-critical,” he thinks.
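A toy calculation illustrates why that tail matters; the two figures below are assumptions picked for illustration, not measurements.

```python
# Toy illustration of the "thick long tail": each oddball scenario is
# vanishingly rare, but there are so many distinct ones that some rare
# event still turns up regularly. Both figures below are assumed.

num_rare_scenario_types = 100_000   # distinct "almost never happens" situations
per_mile_probability = 1e-7         # chance of meeting any one of them on a given mile

expected_per_mile = num_rare_scenario_types * per_mile_probability
print(f"Expected rare events per mile: {expected_per_mile:.2f}")
print(f"Roughly one rare event every {1 / expected_per_mile:.0f} miles")
```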

Something else that worries safety experts is that it’s impossible to verify the reliability of neural networks in the way you would more traditional hand-coded systems, such as those found in aircraft. You check that neural-network-driven cars are safe by observing their behaviour – as with a human driver – but we cannot yet make sense of the numbers passing through the networks that produce their decisions.

In testing, an autonomous car might avoid cyclists perfectly. But in reality, the system may only have been avoiding cyclists wearing helmets, or riding a certain size of bike, explains Koopman – a mis-categorisation that could have disastrous consequences on real roads. The problem is that “we don’t know why it thinks it’s a bike,” he says.
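The kind of shortcut Koopman describes can be reproduced with a deliberately crude, hypothetical “learner” – the data and feature names below are invented – that simply keeps whichever single feature best predicts the training labels.

```python
# Hypothetical illustration: every cyclist in the training set happens to
# wear a helmet, so a crude learner latches on to helmets, not bikes.
# The data and the single-feature "learner" are toy inventions.

training_data = [
    # features -> label (1 = cyclist present)
    ({"sees_bike_frame": 1, "sees_helmet": 1}, 1),
    ({"sees_bike_frame": 1, "sees_helmet": 1}, 1),
    ({"sees_bike_frame": 0, "sees_helmet": 1}, 1),  # bike partly occluded, helmet visible
    ({"sees_bike_frame": 0, "sees_helmet": 0}, 0),
    ({"sees_bike_frame": 0, "sees_helmet": 0}, 0),
]

def pick_best_single_feature(data):
    """Crude stand-in for training: keep the one feature that best predicts
    the label on the training set."""
    best_feature, best_accuracy = None, -1.0
    for feature in data[0][0]:
        correct = sum(1 for x, y in data if x[feature] == y)
        accuracy = correct / len(data)
        if accuracy > best_accuracy:
            best_feature, best_accuracy = feature, accuracy
    return best_feature

learned_feature = pick_best_single_feature(training_data)
print("Learned to key on:", learned_feature)        # sees_helmet

# On the road, a helmetless cyclist is then missed entirely.
helmetless_cyclist = {"sees_bike_frame": 1, "sees_helmet": 0}
print("Detected as cyclist?", bool(helmetless_cyclist[learned_feature]))   # False
```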

This opacity causes two problems when bugs arise. If an autonomous car is bamboozled by a weird event like a tsunami, you can retrain it by feeding it example tsunami scenarios so it copes better next time, says Hollander. But after this retraining: “You don’t know whether you screwed up something else,” he warns. Hence manufacturers still tend to put a non-neural-network “safety wrapper” around the networks. The second problem is that if an autonomous car crashes, it may be very difficult for the manufacturer to explain why it made the mistake, according to Koopman.
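The “safety wrapper” idea can be sketched as a hand-written check that is allowed to veto the network’s output; the interface, units and threshold below are assumptions made for illustration.

```python
# Minimal sketch of a non-neural "safety wrapper": a hand-coded rule that
# can override the learned policy. Interface, units and threshold assumed.

MIN_SAFE_GAP_M = 10.0   # assumed minimum gap to the obstacle ahead, in metres

def neural_policy(sensor_frame):
    """Placeholder for the learned driving policy (a neural network in a
    real system); it returns a fixed command so the example runs."""
    return {"throttle": 0.4, "brake": 0.0}

def safety_wrapper(sensor_frame, command):
    """Hand-written rule: never accelerate towards a close obstacle,
    whatever the learned policy suggests."""
    if sensor_frame["gap_to_obstacle_m"] < MIN_SAFE_GAP_M and command["throttle"] > 0:
        return {"throttle": 0.0, "brake": 1.0}   # veto: brake instead
    return command

frame = {"gap_to_obstacle_m": 6.0}
print(safety_wrapper(frame, neural_policy(frame)))   # {'throttle': 0.0, 'brake': 1.0}
```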

Ben Peters, vice president of product and marketing at FiveAI, a self-driving car software company, believes the solution is to modularise neural networks, with each network given a particular specialisation. One might recognise street signs, for example, and another the edges of lanes. This would potentially make it easier to track down the source of a problem in the event of a crash, and then to retrain that particular network. Peters calls those who use one neural network for lots of different tasks – steering, acceleration and braking – “bonkers”, as all-in-one systems cannot easily be corrected.
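Sketched in code, that modular arrangement might look something like the following; the module names and interfaces are invented, with plain functions standing in for the individual networks.

```python
# Sketch of the modular approach: each "network" (here a stand-in function)
# has one job, so a failure can be traced to, and retrained in, a single
# module rather than an end-to-end black box. Names and interfaces assumed.

def sign_recogniser(image):
    """Stand-in for a network trained only on street signs."""
    return {"speed_limit_kph": 50}

def lane_detector(image):
    """Stand-in for a network trained only on lane-edge geometry."""
    return {"lane_centre_offset_m": 0.2}

def planner(signs, lanes):
    """Hand-written logic that combines the modules' outputs."""
    return {
        "target_speed_kph": signs["speed_limit_kph"],
        "steering_correction_m": -lanes["lane_centre_offset_m"],
    }

camera_frame = object()   # placeholder for an image
print(planner(sign_recogniser(camera_frame), lane_detector(camera_frame)))
# If the car misreads a sign, only sign_recogniser needs retraining.
```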

The black-box nature of neural networks has led to a push in computer science towards explainable AI. Trevor Darrell, an artificial intelligence expert at the University of California, Berkeley, is working on the US Defense Advanced Research Projects Agency (DARPA) explainable AI initiative kick-started a year ago, and recently helped come up with an image identification system [PDF] that helps the system explain why it thinks a photo is of, say, baseball (answer: “the player is swinging a bat”). Darrell is more optimistic than many, who fear that some perception tasks may be “inherently unexplainable,” he says.

It’s been said the software industry averages between 15 and 50 bugs per 1,000 lines of code. That wasn’t much of a problem when the code ran on your PC – Windows has run to between 40 and 60 million lines of code in recent years. If your app or your PC crashed, no harm was done to life or limb. Cars, on the other hand, typically carry around 200 million lines of code, so even at the bottom of that range the software harbours a conservative three million bugs.
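For what it’s worth, the back-of-the-envelope sum behind that figure:

```python
# The arithmetic behind the estimate quoted above.
bugs_per_kloc_low, bugs_per_kloc_high = 15, 50   # industry-average defect density
lines_of_code = 200_000_000                      # roughly what a modern car carries

low = lines_of_code // 1_000 * bugs_per_kloc_low
high = lines_of_code // 1_000 * bugs_per_kloc_high
print(f"{low:,} to {high:,} bugs")               # 3,000,000 to 10,000,000 bugs
```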

Manufacturers acknowledge they can’t eliminate bugs entirely, but hope that, even so, autonomous cars will be safe – not just for those on board, but for others on the road too.

With manufacturers talking of delivering autonomous vehicles in large numbers from 2020–2021, let’s just hope they find a way to squash the bugs between now and then. ®
