
The sooner AI stops trying to mimic human intelligence, the better – as there isn't any

Still waiting for neuroscientists to work out why

Something for the Weekend, Sir? Never again. As Gods are my witnesses, you will never catch me [insert gerund here] in future. I have learnt my lesson.

You won’t catch me because I’ll be more careful next time.

Learning from one’s mistakes is a sign of intelligence, they say. This is why machine-learning is at the heart of artificial intelligence. Give a Billion Dollar Brain enough Big Data to work with and its subroutines will eventually work out the right solutions on its own – "Treat Patient X for early signs of cancer", "Investigate Citizen Y’s tax return", "Kill Harry Palmer", and so on.

That may be how humans like to fantasise that intelligence works, but is that really what human intelligence does? Most humans I’ve met are like me: we make the same mistakes over and over, partly because we have other things on our minds, and partly because it’s fun. At the same time, some of humanity’s greatest achievements arise from hunches, guesswork and pure luck rather than from the painstaking evaluation of evidence.

I can’t say I have any personal examples to give of the latter but I have plenty of the former. Here are some things I say I will never do again, but do anyway:

  • Install updates as soon as they’re released
  • Write long emails using my phone while sitting in front of my laptop
  • Watch trash TV after midnight
  • Park in the same spot at the supermarket
  • Order from Amazon
  • Try to grow a beard then shave it off because the itching drives me mental
  • Accept a project with weeks of notice and start work on it the evening before its 10am deadline
  • Put off looming work until the weekend, then waste that weekend pottering about
  • Seek more work, win the contract, then convince myself it’s an imposition
  • Volunteer my services, only to find everyone else is getting paid for theirs
  • Make shit up for a weekly column on an IT news website

It’s not just an inability to learn from mistakes, either. There have been occasions when I simultaneously regret a decision and look forward to doing it again some day. One that springs to mind was taking Mme D on a brewery tour followed by a tutored beer tasting.

She’s not a drinker, you see. A sip of Mousseux du Magazin-de-Livre on special occasions is her limit; beer she finds utterly foul. So sitting down in front of six chunky tumblers of assorted evil-smelling concoctions brewed on-site was probably not her idea of a thrilling afternoon’s entertainment. We were invited to sniff and quaff each one in turn, and even the sniffing bit was too much. Honestly I don’t blame her: all the beers tasted great but one smelt like sick, another of dog-turds.

So I did what any supportive partner would do: I drank hers as well as mine. You know, to save her any embarrassment.

Photo of half-drunk glasses on a table during a beer tasting at the Shepherd Neame Brewery

Time, gentlemen, please. Oh, are you not having that...?

I remember it was hardly the most successful afternoon out we’ve spent together, and yet I can’t wait to do it again. That said, my memory of the event is a little hazy in the details, such as the names of all the beers or how we got home.

How would a brand spanking new AI evaluate my contradictory decision-making process, I wonder. Well, possibly not as well as an old one. As Silent Eight’s John O’Neill points out, "AI learns from every single case it sees. The longer it’s on the job, the more it learns – and the better it gets."

In this sense, AI is the opposite of conventional software. Instead of waiting for the next much-improved upgrade to be released, the best time to jump on board the AI train is, well, about two years ago. By now, it will already have acquired 24 months’ worth of machine-learning and will beat the pants off anything just out of the stocks today.

"A system with two years’ learning under its belt will do the job far better than a new one that has to learn everything from scratch," reckons O’Neill. "It's the same reason that someone who's been with your institution for two years works more efficiently than a newly hired employee. Experience matters."

This is the line I increasingly bleat at commissioning editors. In future, I might even start misquoting O’Neill when he says "AI actually gets more valuable as it ages." Backspace ‘AI’/‘it’, insert ‘Dabbsy’/‘he’, and I’m rolling.

There are well-known and fully acknowledged challenges to machine-learning, of course, with regard to the quality of the datasets it is given to ingest. It has been shown time and time again how unintentional human bias in the recording and qualification of the initial data can sabotage an AI’s ability to arrive at balanced, neutral conclusions.

Can you teach an AI to recognise potential criminals by their physiognomy? Let’s give it all our court files for the last 20 years and see what it comes up with. All you’ll get is a machine that’s learnt to be as unconsciously racially biased in its profiling as the arresting officers and judges delivering the sentences. We’ve got plenty of those already; we don’t need to make the situation worse by having an AI devise an algorithm to automate it.
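The mechanism is depressingly simple. As a toy sketch (hypothetical names and made-up numbers, nothing to do with any real profiling system), here is how a naive model trained on skewed records simply memorises the skew:

```python
# Toy sketch: a "model" that learns conviction rates per postcode
# from historical records. If area "B" was over-policed, the model
# dutifully learns that bias and reports it back as a prediction.
from collections import defaultdict

def train(records):
    """records: list of (postcode, convicted) pairs from old court files."""
    counts = defaultdict(lambda: [0, 0])  # postcode -> [convictions, total]
    for postcode, convicted in records:
        counts[postcode][0] += int(convicted)
        counts[postcode][1] += 1
    # "Risk score" is just the historical conviction rate per area
    return {pc: conv / total for pc, (conv, total) in counts.items()}

# Same underlying behaviour in both areas, but far more prosecutions
# were recorded against area "B" -- the bias lives in the data.
biased_records = ([("A", True)] * 10 + [("A", False)] * 90
                  + [("B", True)] * 40 + [("B", False)] * 60)

model = train(biased_records)
print(model)  # area "B" scores four times "riskier", purely by inheritance
```

No malice required: the model has no idea whether the 40 convictions in area "B" reflect behaviour or policing priorities, so it automates whichever one produced the data.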

Last month, The Reg reported on Professor Frank Pasquale’s lecture at Brooklyn Law School postulating how the legal system could be expanded to hold machine-learning companies to account for bias in the AIs they develop.

Good luck with that, Frank.

Perhaps long ago in the past it was possible for citizens to claim damages, or even plain old consumers to get their money back, if a product or service caused harm or was unfit for purpose. Not any more. Just explode a mandatory 50,000-word T&Cs window in front of your software along with two buttons at the bottom – ‘I Accept’ and ‘No. What? Oh All Right Then’ – and you’re off the hook. No matter which button the user taps, they’ve formally and legally accepted the steaming pile of failure purporting to be a working system, and it’ll be all their own fault when it eventually fucks everything up.

Why should it be any different for the AIs now being rolled out into healthcare, finance and public administration?

And the funny thing is that we’ve known this scam for decades. We whinge and complain about it, then click on ‘I Accept’ anyway. It’s as if we can’t learn from our mistakes any better than AIs can, or want to.

"Practice makes perfect" goes the old adage. Not necessarily: I’ve found that if I repeat the same mistakes over and over again, I gradually get better and more efficient at making those mistakes. This is a curious aspect of intelligent human behaviour, and it’s what machine-learning is inadvertently designed to replicate.

So I suppose the saying still works: practice does make perfect. With enough practice, eventually I – along with all the AIs in the world – will become perfectly incompetent.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He admits his opinion betrays a pessimistic glass-half-full attitude to the current state of machine-learning. You are free to top up that glass any time you like. "Oh, are you not drinking that? May I…?" More at Autosave is for Wimps and @alidabbs.
