Mobileye's autonomous cars are heading to California. But they're not going to kill anyone. At least not on purpose

Human CEO outlines safety policy for other humans


Prediction

But he then claims that this means there is no need for an autonomous car to predict what other cars are going to do. "You don't need to forecast what other vehicles will do," he states boldly.

Which is not only a bit of a leap in logic, but also ignores the fact that autonomous cars are constantly predicting what other cars are going to do – most obviously when they estimate the speed of the cars around them. You can even see little velocity lines drawn on each car in Mobileye's software.
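Those velocity lines are no mystery: once a perception system is tracking a vehicle from frame to frame, its speed falls out of simple differencing. A minimal sketch of the idea in Python (our own illustration with made-up numbers, not Mobileye's code):

```python
import math

def estimate_velocity(track):
    """Estimate a tracked car's velocity from its last two observations.

    track: list of (timestamp_seconds, x_metres, y_metres) tuples,
    oldest first. Returns (vx, vy) in metres per second.
    """
    (t0, x0, y0), (t1, x1, y1) = track[-2:]
    dt = t1 - t0
    return (x1 - x0) / dt, (y1 - y0) / dt

# Two observations of the same car, 100 ms apart
car_track = [(0.0, 12.0, 3.5), (0.1, 13.4, 3.5)]
vx, vy = estimate_velocity(car_track)
speed = math.hypot(vx, vy)  # 14 m/s, roughly 50 km/h
print(f"velocity=({vx:.1f}, {vy:.1f}) m/s, speed={speed:.1f} m/s")
```

Production systems smooth these estimates with something like a Kalman filter rather than taking raw differences, but even this crude version is a forecast: extrapolate that vector forward in time and you are predicting where the other car will be.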

The claim also goes directly against a presentation given by Intel Labs later in the conference, where a good chunk of the session was spent discussing "probabilistic computing", in which computers take sensory data and try to figure out what it means for future events.

And the example given? A ball bouncing along the street and crossing the path of a self-driving car. With probabilistic computing, a system should be able to see that ball and predict there is a good chance a child will soon emerge to run after the ball – potentially putting a human in harm's way. That kind of advanced thinking is the future.
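To make that concrete, here is a toy version of the reasoning in Python (again, our own sketch with invented probabilities, not Intel's code). Seeing the ball doesn't tell the car a child is coming; it shifts the odds enough that the car should brake anyway:

```python
def posterior_child(prior, p_ball_given_child, p_ball_given_no_child):
    """Bayes' rule: probability a child will run into the road,
    given that a bouncing ball has just been observed."""
    evidence = (p_ball_given_child * prior
                + p_ball_given_no_child * (1 - prior))
    return p_ball_given_child * prior / evidence

# Invented numbers: a child near the road is rare (1 per cent), but a
# bouncing ball is far more likely when a child is nearby.
p = posterior_child(prior=0.01,
                    p_ball_given_child=0.60,
                    p_ball_given_no_child=0.005)
print(f"P(child | ball) = {p:.2f}")  # ~0.55

SLOW_DOWN_THRESHOLD = 0.10  # err heavily on the side of caution
if p > SLOW_DOWN_THRESHOLD:
    print("brake early: a child may follow the ball")
```

The point of the design is that the threshold sits far below 50 per cent: the cost of braking for a ball with no child behind it is a few seconds; the cost of the alternative doesn't bear thinking about.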

But not according to Shashua. Instead, he dismisses this "generative approach" of predicting what others will do, saying it would simply create too much information to compute effectively, and argues that autonomous cars must use a "discriminative approach" that focuses only on a specific goal, such as "put the car in that space" or "get ahead of that vehicle."

And then, in an apparent effort to prove that all of his previous arguments are correct, we get the goods: a video of one of Mobileye's autonomous vehicles at that difficult junction in Jerusalem, cleverly navigating other cars that are driving aggressively as it assertively works its way from the right-hand lane all the way over to the left-hand lane.

Terrific.

Lane hogging

Except what some of us in the audience saw was some worrying behavior along the way: in getting over to the left-hand lane from the middle lane, Mobileye's car for several seconds effectively hogged both lanes, running along the far left of the middle lane and making it difficult for any car to get past on its left.

Shashua was very pleased with this behavior and said it clearly indicated the car's intention to other road users. The very first question asked after the presentation was over took issue with this, albeit in a gentle way.

"How do you account for different driving styles in different cities?" asked an attendees, noting that driving in Boston is very different to driving in another city, even a city on the East coast of the US, let alone one on the other side of the world.

We had had the exact same thought. If you tried to drive in Los Angeles, for example, the way the car had driven in Jerusalem, you would immediately anger other road users. They would more than likely respond aggressively – cutting down the left or running up against your bumper. (And that's not forgetting LA's peculiar habit of driving in your blind spot.)

As such, a Mobileye autonomous car could actually cause accidents while being blissfully unaware of it. And the more of them moving around a city acting like Jerusalem drivers, the greater the chances of an accident.

To our surprise, rather than discuss how an autonomous car will learn quickly from the reactions of other drivers and so arrive at a culturally appropriate way of driving, Shashua simply dismissed the question.

"That's not a big issue," he responded abruptly. "Response time will change from region to region. And assertive driving is a spectrum that can change." He moved onto the next question.

Crash

He was then asked another critical question: what will a Mobileye car do if it senses that an accident is inevitable? How will it decide what course of action to take – will it crash itself to avoid an accident? Will it strike another car to avoid a pedestrian?

Again, rather than respond thoughtfully, Shashua was dismissive and appeared not to hear, or to choose to ignore, the main thrust of the question.

"We call that an 'ethical dilemma'," he responded. "RSS removes any ethical dilemmas."

By RSS, he was referring to Mobileye's own "Responsibility-Sensitive Safety" model, which the company describes as a "mathematical model for autonomous vehicle safety" and which it thinks should become the global approach to thinking about autonomous cars, safety and, by extension, accidents.

According to Shashua, under that model, one of his cars can take any action so long as it does not cause an accident. Everything else is "very subjective."
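For what it's worth, the published RSS paper (Shalev-Shwartz, Shammah and Shashua, On a Formal Model of Safe and Scalable Self-driving Cars, 2017) really does reduce "not causing an accident" to formulas. Here is our reading of its core rule – the minimum safe longitudinal distance to keep behind the car in front – as a Python sketch with illustrative, non-Mobileye parameter values:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe gap (metres) behind a lead car, per our reading of
    the RSS paper: assume the rear car accelerates at a_max_accel for
    its response time rho before braking at only b_min_brake, while
    the front car brakes as hard as b_max_brake.

    Speeds in m/s, accelerations in m/s^2; the parameter values here
    are illustrative, not Mobileye's.
    """
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_response ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Both cars doing 30 m/s (roughly 108 km/h) on a motorway
print(f"{rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")  # ~83 m
```

The guarantee, as the paper frames it, is that a car that keeps at least this gap and responds properly is never to blame for a longitudinal collision. What the formula conspicuously does not tell you is what the car should do next – which, as we were about to find out, is exactly the question Shashua didn't want to answer.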

It's at this point that we started getting worried. Because at the heart of every big policy mistake is the initial mindset that makes it happen.

Take Mark Zuckerberg and Facebook. The company has absolutely no consideration for other people's privacy and personal data. Every time it is called out over what society sees as an abuse of its position, Facebook responds by offering the illusion of additional privacy controls.

But Facebook isn't going to change its approach until there is a cultural shift in thinking. And that isn't going to happen until or unless Zuckerberg fundamentally changes his own view on the matter.

The Zuckerberg privacy mindset (this was the guy who downloaded women's pictures from Harvard websites and placed two next to one another to decide which was hotter) has driven the business, which has driven the policy – which is to gather as much data on people as possible and find ways to sell it. All of it.

Facebook for cars

Except rather than the car crash of Facebook and user privacy, with Mobileye we could be looking at literal car crashes, as autonomous vehicles cause accidents while the company triumphantly exclaims: "Not our fault! We were following our RSS protocol!"

After the presentation and Q&A, Shashua gave a roundtable session with just media, which we attended and during which our niggling doubts grew into full-blown fears.

He talked at length about how there needed to be a standardized safety model for the autonomous vehicle industry – and how his RSS approach was the right one. What's more, because Mobileye is now owned by Intel, it has the weight to push that safety model onto the industry and onto global regulators.

Asked about the high-profile accidents experienced by Tesla and Uber, his response was to argue that it was necessary to "define in advance what is dangerous" and then use sensor data to "prove there was no sensing mistake." The logic being that so long as the sensors worked, and the car did what it was programmed to do, it cannot be blamed for an accident.

He again argued that "not causing an accident" is a sufficient level of safety for an autonomous car. Asimov's three laws boiled down to one.

So we asked him about a specific scenario: a Mobileye car comes to a halt on a freeway to avoid causing an accident. But what happens then? By being stationary on a fast-moving road, it risks causing a subsequent accident – what is the timeframe that his RSS model works from? Does a car immediately reset and then start moving?

He gives a rambling answer, repeating the same line about not causing an accident. When he has finished, we return to it, pointing out that he hasn't answered the question: what happens after the car has come to a halt and the accident has been avoided?

Hmmm Pt 2

"Ah! But it hasn't caused an accident!" he responds triumphantly, adding that there was now nothing to stop the car from driving away. We follow-up: what if there was an accident – and the car had collided with another? He's getting irritated at this point and responds quickly and immediately indicates he wants another question: "Then they would have to pull over and trade details."

This happens again and again with other people's questions, with Shashua repeatedly using straw man arguments to avoid answering questions about map accuracy, car-to-car communication (V2V), and the importance of algorithms in deciding a car's behavior.

As just one example, when asked about the accuracy of Mobileye's current maps and his newly stated desire to use crowd-sourcing from Mobileye's cameras on other vehicles to build up better maps, he blew the entire question up, stating that the only way to be truly accurate would be to insert magnets in the lanes of every road – "and who's going to pay for that?"

But of course, that's not what was being asked. What was being asked was the accuracy of Mobileye's maps.

Danger

This typical engineering "I am right" mindset is often what leads to technological breakthroughs but, as has been proven time and time again, it is dangerous when applied to a larger context.

Usually, the inability to listen to other people's concerns and accept them as important and valid is what prevents this type of personality from reaching a chief executive position. But things are different when the engineer is the founder of the company.

It wouldn't matter if Mobileye were focused solely on the improvement and production of its cameras and related software, because the decisions about safety and the protocols that define an autonomous car's behavior would be developed by a third party. But Mobileye is running its own fleet of autonomous cars and hopes to put them into the public marketplace in 2021.

And then, just as we were thinking how glad we were not to live in Jerusalem, Amnon Shashua revealed that the company will start testing its vehicles in California – next month. And a big chunk of the autonomous car companies in California choose San Francisco as one of their main test areas precisely because of its complexity.

Now, we don't sit on the alarmist side of the equation when it comes to autonomous cars. Standing on the side of Geary Street, we have at some point seen every one of them drive past. But based on this week's presentation, and the mindset of the guy deciding on its crucial safety policies, if we see a Mobileye autonomous vehicle we are going to stay well clear of it. And we would advise you to do the same.

One thing we are sure of: if it does get into an accident, the company will be absolutely certain that it wasn't its fault. ®

