You Look Like a Thing and I Love You: A quirky investigation into why AI does not always work

Flaws are 'far beyond merely inconvenient', writes Janelle Shane


Book review Everyday AI has roughly the intelligence of an earthworm, according to Janelle Shane, a research scientist at the University of Colorado who is better known as an AI blogger.

Since AI is both complicated and massively hyped, and therefore widely misunderstood, her new book is a useful corrective.

You Look Like a Thing and I Love You is both funny and annoying. It is based partly on content from the author's AI Weirdness blog, where she recounts what happens when you use artificial intelligence for unusual purposes. Examples include creating chat-up lines (one result being the title of the book), writing recipes, telling jokes, sorting tasty sandwiches from disgusting ones, and creating robots for crowd control.

What these examples drive home is that AI has no real understanding of what it is doing. Shane could get an AI to produce quite convincing-looking recipes, for example, from a model trained on thousands of real ones. Look more closely, though, and every recipe is nonsense, because the AI has no idea what makes a tasty dish, or even how cooking works. It is just generating text from patterns it has learned.
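The principle is easy to see in miniature. Shane's generators are neural networks, but a word-level Markov chain shows the same idea at its simplest: record which words follow which in the training text, then emit text by sampling those learned followers, with no notion of what any of it means. (The tiny "corpus" below is invented for illustration.)

```python
import random
from collections import defaultdict

# A made-up scrap of recipe-like text to learn from.
corpus = (
    "combine the butter and the sugar . "
    "combine the flour and the butter . "
    "bake the mixture ."
).split()

# Learn the "pattern": for each word, the words seen following it.
chain = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    chain[current].append(following)

def generate(start, length, seed=0):
    """Emit text by repeatedly sampling a learned follower word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        followers = chain.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("combine", 6))
```

The output is locally plausible and globally meaningless, which is exactly the failure mode the recipes exhibit at a much larger scale.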

You Look Like a Thing and I Love You by Janelle Shane

Shane uses these examples and others drawn from the history of AI to explain some basics about how it learns and how it comes up with its predictions or imitations. AI is different from algorithmic programming in that it learns by example. Want image recognition? Supply a large database of images categorised according to what you want the AI to recognise, train the model, and the AI will work out its own rules for categorising new images.
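"Learning by example" can be sketched in a few lines. Real image recognisers are deep neural networks, but a one-nearest-neighbour classifier shows the same supervised recipe: labelled examples in, a categoriser for new inputs out, with no rules written by hand. (The feature vectors and labels below are invented for illustration.)

```python
# The "model" is nothing more than the labelled examples themselves:
# a new input gets the label of whichever example it sits closest to.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(examples, point):
    """examples: list of (feature_vector, label) pairs."""
    _, label = min(examples, key=lambda ex: distance(ex[0], point))
    return label

# Tiny made-up "image" features: (brightness, edge_count).
training = [
    ((0.9, 12), "cat"),
    ((0.8, 10), "cat"),
    ((0.2, 3), "sheep"),
    ((0.3, 4), "sheep"),
]

print(classify(training, (0.85, 11)))  # -> "cat"
print(classify(training, (0.25, 2)))   # -> "sheep"
```

No one told the classifier what a cat looks like; the categorisation falls out of the examples, which is both the power and, as the next section shows, the weakness of the approach.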


This approach has magical power in the right circumstances, but it is also problematic. The quality of the result is only as good as the quality of the dataset. The AI may develop faulty rules. For example, it might only recognise sheep if they are on grassy backgrounds, so whereas a human could easily spot a sheep in a living room, the AI might not.
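The sheep-on-grass failure can be reproduced with a toy learner. If every training sheep is photographed on grass, a model free to choose its own rule may latch onto the background rather than the animal. Below, a one-feature "decision stump" is picked purely by training accuracy; the features, data, and names are invented for illustration.

```python
# Each example: {feature: 0 or 1}, paired with label 1 = "sheep in picture".
# In this made-up training set the grassy background predicts the label
# perfectly, while the sheep itself is sometimes half-hidden.
training = [
    ({"woolly_blob": 1, "grassy_background": 1}, 1),
    ({"woolly_blob": 0, "grassy_background": 1}, 1),  # sheep partly hidden
    ({"woolly_blob": 0, "grassy_background": 0}, 0),
    ({"woolly_blob": 0, "grassy_background": 0}, 0),
]

def accuracy(feature):
    # Fraction of training examples where the feature equals the label.
    return sum(ex[feature] == label for ex, label in training) / len(training)

# Choose the single most predictive feature on the training data.
best = max(["woolly_blob", "grassy_background"], key=accuracy)
print(best)  # -> "grassy_background": the stump keys on the background

# A sheep in a living room: clearly woolly, no grass in sight.
sheep_in_living_room = {"woolly_blob": 1, "grassy_background": 0}
print(sheep_in_living_room[best])  # -> 0: classified as "no sheep"
```

The rule is faulty, but it was the best rule available in the data, which is the point: the learner optimises for the dataset it was given, not for the concept we had in mind.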

In general, AI is bad at spotting unusual things. This has consequences for many use cases. Autonomous vehicles work fine most of the time, for example, but are also capable of catastrophic errors if something unexpected occurs. "It's inevitable that something will occur that an AI never saw in training," says Shane. At this point the AI may be smart enough to hand control to a human, but "humans are very, very bad at being alert after boring hours of idly watching the road," she writes.

Bias in AI is another issue. "Responsible AI", for example, was the title of a press session at the Google Next event in London this week. Everyone agrees on the importance of avoiding bias. Read Shane's book, though, and you will conclude that it is all but impossible.

AI inherits the bias of the data it is given, and if it comes from humans, it will not be neutral. Amazon, we are told, gave up on AI for identifying promising job applications because it could not eradicate gender bias, among other things. Simply removing gender information was insufficient as the AI used other clues to prefer male applicants – because they were preferred in the data on which it was trained. Huge effort is expended to work around problems like this, but it is difficult – made worse by the fact that working out exactly how an AI process has reached its conclusions can itself be a challenge.
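The proxy problem is mechanical rather than mysterious, and a toy version makes it concrete. In the invented data below, the biased hiring history correlates with gender, and a hobby column correlates with gender too; delete the gender field and an accuracy-chasing learner still finds the proxy. (All column names and rows are made up for illustration; this is not Amazon's system.)

```python
# Toy applicant records. Historical "hired" labels track gender exactly,
# and so does hobby_a; hobby_b is uninformative noise.
rows = [
    {"gender": 1, "hobby_a": 1, "hobby_b": 1, "hired": 1},
    {"gender": 1, "hobby_a": 1, "hobby_b": 0, "hired": 1},
    {"gender": 0, "hobby_a": 0, "hobby_b": 0, "hired": 0},
    {"gender": 0, "hobby_a": 0, "hobby_b": 1, "hired": 0},
]

# "De-bias" the data by deleting the protected column.
for row in rows:
    del row["gender"]

def training_accuracy(feature):
    # How well a single feature predicts the (biased) historical label.
    return sum(row[feature] == row["hired"] for row in rows) / len(rows)

# A one-feature rule chosen purely by accuracy on the biased history.
best = max(["hobby_a", "hobby_b"], key=training_accuracy)
print(best)  # -> "hobby_a": the gender proxy survives the deletion
```

Because the labels themselves encode the bias, any feature correlated with gender remains a winning predictor, which is why simply dropping the column was not enough for Amazon.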

One of Shane's points early in the book is that AI only works when it is specialised. You can teach it to play chess or identify images, but general intelligence in the style of a sci-fi robot like C-3PO is way beyond today's AI. How smart is AI? Maybe as smart as an earthworm for everyday examples, she says, or as a honeybee for the most powerful neural networks. Human-like general intelligence is a long way off.

This is not a technical book but it does explain the essentials of topics including neural networks, training models, Markov chains and generative adversarial networks. It is a good title to give someone who thinks AI will solve all our problems. Not that Shane is gloomy about AI; it is obvious that she loves what it does. There are some real dangers, though. "As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences far beyond the merely inconvenient," she writes.

Conclusion? "There is every reason to be optimistic about AI, and every reason to be cautious," she remarks. The problem, however, is not that AI is too smart, but that it is not smart enough. Trust it too much and... well, you have been warned.

You Look Like a Thing and I Love You is published by Wildfire, ISBN 9781472268990. ®
