
While truly self-driving cars are surely just around the corner, for now here's an AI early-warning system for your semi-autonomous ride

Hey, we heard you like machine learning. So we put a machine-learning system in your machine-learning system

Self-driving cars could be equipped with a trained early-warning system that alerts the person behind the wheel whenever it realizes it's entering a situation where a human driver has had to take over before.

Today's systems, like Tesla’s inappropriately named Autopilot with “full self-driving capability,” rely on software to identify objects and structures in real-time to perform specific driving functions, such as changing lanes or stopping at traffic lights.

It's not a completely autonomous affair, though: drivers have to take control of the car when the software is unable to deal with a situation developing around it. This breakdown in ability is typically because the code controlling the vehicle encounters a scenario it is unfamiliar with or finds confusing. The faster things go south, the faster the human has to react and take over.

While we wait for artificial intelligence to, basically, get better at driving, researchers at the Technical University of Munich (TUM) in Germany have come up with a novel way to generate an early warning for humans that the semi-autonomous car they're in may be about to give up. While at least some self-driving-ish systems alert their driver that they need to take over because things are getting out of hand, the TUM approach hopes to give folks more advance and accurate warning.

Essentially, the boffins made machine-learning software study the scenarios in which humans took over from a self-driving car – either because the person was told to do so by their vehicle or they took the initiative without a prompt – so that the software could learn to recognize tricky situations further in advance. The goal was to generate an early warning that a self-driving car might otherwise raise too late or not at all.

“To make vehicles more autonomous, many existing methods study what the cars now understand about traffic and then try to improve the models used by them,” said Eckehard Steinbach, a professor at TUM’s department of electrical and computer engineering, this week.

"The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns.

“In this way, the AI discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover. Our system therefore offers a safety function that knows when and where the cars have weaknesses,” added the prof, who is co-author of a paper describing the tech, which was published in IEEE Transactions on Intelligent Transportation Systems.

Prof Steinbach and his colleagues worked with BMW to develop their human-centric alert system, gathering sensor data and camera footage over months of test drives in BMW cars, he told us.

The goal was to train a recurrent neural network to recognize when the vehicle was entering conditions similar to those that had previously proved too difficult and forced a human to take over. They trained their model on 90 per cent of the data and tested their system on the remaining 10 per cent, or thereabouts.

"In total, we trained on 4,078 unique driving scenarios and we tested the software on another 510 unique driving scenarios,” the professor told us.
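For illustration only – this is not the TUM team's actual code, data, or architecture, and every array shape, channel count, and weight here is a hypothetical stand-in – the setup described above (a roughly 90/10 split of driving scenarios, fed to a recurrent network that emits a takeover probability) might be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 4,588 "driving scenarios", each a sequence of
# 20 timesteps with 8 sensor channels, labelled 1 if a human had to take over.
X = rng.normal(size=(4588, 20, 8))
y = rng.integers(0, 2, size=4588)

# Roughly 90/10 train/test split, matching the figures quoted above:
# 4,078 scenarios for training, 510 held out for testing.
X_train, X_test = X[:4078], X[4078:]
y_train, y_test = y[:4078], y[4078:]

def rnn_takeover_prob(seq, Wx, Wh, w_out, b_out):
    """Minimal Elman-style RNN: fold the sensor sequence into a hidden
    state, then squash a linear readout into a takeover probability."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:                       # one timestep of sensor readings
        h = np.tanh(Wx @ x_t + Wh @ h)    # recurrent state update
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid -> probability in (0, 1)

# Untrained illustrative weights; a real system would fit these to the data.
hidden = 16
Wx = rng.normal(scale=0.1, size=(hidden, 8))
Wh = rng.normal(scale=0.1, size=(hidden, hidden))
w_out = rng.normal(scale=0.1, size=hidden)
b_out = 0.0

p = rnn_takeover_prob(X_test[0], Wx, Wh, w_out, b_out)
print(len(X_train), len(X_test))  # 4078 510
```

A probability threshold on that output is what would trigger the early warning, some seconds before the situation fully develops.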

This experimental model was thus able to predict potentially dangerous scenarios with 85 per cent accuracy up to seven seconds before they occurred. Prof Steinbach said they can improve their neural network’s performance by collecting more data and incorporating input from additional types of sensors.

“The more training data is collected, the better the model will be able to generalize, which will reduce the false positive rate. Since more training data is collected without additional cost during more test drives, this is only a matter of time,” he told The Register.

"The software itself can be improved by using more sources of information about the scenario as input. In our ongoing research, we are investigating more such sources like the planned trajectory of the car and additional sensor modalities such as LIDAR or radar." ®
