KFC: Enemy of waistlines, AI, arteries and logistics software

Self-driving cars mistake the Colonel for a Stop sign, which is cruel given a software SNAFU's emptied UK eateries

Brits suffering through the nationwide KFC famine can enjoy with wry amusement the fact that an AI can be fooled into thinking an image of Colonel Sanders and the restaurant's logo are a stop sign.

The fast food famine arose after KFC UK last week switched from logistics provider Bidvest to rival DHL. The result was delivery delays that left hundreds of restaurants devoid of chicken.

KFC blamed "teething problems" for its supplies becoming as rare as hen's teeth. Other reports suggest software issues contributed to the chickens' scratching. KFC at least got a decent gag out of it all on Twitter.

Oh, and the AI story … which follows the now-familiar theme of launching adversarial attacks against machine learning and neural networks. This time a group of Princeton University boffins have devised what they call "DARTS: Deceiving Autonomous Cars with Toxic Signs".

One of the many tricks outlined in their paper on arXiv is to fiddle with a KFC sign just enough that the AI they attacked thought it was a Stop sign (notwithstanding the futility of stopping at a KFC sign at the moment, see above).

As they note: "Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars."

Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang and Prateek Mittal say their tests covered:

  • Sign embedding - an innocuous image such as a logo or advert (the KFC sign, for instance) is perturbed so the AI reads it as a traffic sign;
  • Adversarial traffic sign (the most generic of attacks) - a genuine traffic sign is perturbed so the AI interprets it as a different traffic sign; and
  • Lenticular printing - where what's seen by the AI (and by a human) depends on viewing angle, so the researchers' Macca's sign becomes a stop sign when viewed from the right angle.

Clearly the dangers are real: if a smudged 120 speed-limit sign (and in the real world the smudge might not even be the work of a malicious adversary, just weathering) can look like a 30 speed-limit sign, an autonomous car might slow down dramatically on a freeway, and the results would not be comical.

In the true spirit of an AI experiment, the Princeton group created a pipeline to generate their attack signs.

First, DARTS acquires the original image and chooses a target class to classify it as; it then creates a digital version of the adversarial example (within a perturbation budget, which in lay terms means "don't change it too much"); and finally it prints out the adversarial example for real-world testing.
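For the curious, here's roughly what such a pipeline looks like in code. This is a minimal PyTorch sketch of a generic targeted, budget-limited perturbation attack, not the Princeton group's actual implementation; the model, the class index standing in for "stop sign", and the file names are illustrative assumptions.

import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_class, budget=8/255, step=1/255, iters=40):
    # Projected-gradient-style targeted attack: nudge pixels toward target_class
    # while keeping every change within an L-infinity perturbation budget.
    model.eval()
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target_class)  # low loss = "looks like" the target
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step * grad.sign()                      # step toward the target class
            adv = image + (adv - image).clamp(-budget, budget)  # stay inside the budget
            adv = adv.clamp(0, 1)                               # keep pixel values valid
    return adv.detach()

# Hypothetical usage: a traffic-sign classifier and class index 14 standing in
# for "stop sign"; the adversarial image is saved for printing and testing.
# from torchvision import transforms as T
# from PIL import Image
# model = load_traffic_sign_classifier()                  # assumed to exist
# img = T.ToTensor()(Image.open("kfc_sign.png").convert("RGB")).unsqueeze(0)
# adv = targeted_attack(model, img, torch.tensor([14]))
# T.ToPILImage()(adv.squeeze(0)).save("adv_sign_to_print.png")

The budget clamp in the middle is the "don't change it too much" part: it keeps the doctored sign looking like the original logo to a human while the classifier is steered toward the target class.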

They claim a success rate better than 90 percent, even in a black-box setting where the attackers can't see inside the algorithms they're attacking. ®
