El Reg was invited to the House of Lords to burst the AI-pocalypse bubble

Notes from an Oral Evidence session


Comment To Westminster, England, where the House of Lords is conducting a wide-ranging inquiry into artificial intelligence.

I'm writing a book on AI – and why people believe in it – and, on the basis of some notes, was invited to give oral evidence on a panel of three alongside the BBC's Rory Cellan-Jones and the Financial Times' Sarah O'Connor.

AI is so hot right now, Parliament TV decided to show it live. Here's how it went.

When Nick Bostrom is the biggest AI sceptic in the room

The inquiry's first panel preceded mine, and starred Oxford University philosopher Professor Nick Bostrom, Professor Mike Wooldridge, head of Oxford's computer science department, and Dame Wendy Hall, DBE, FRS, FREng. Bostrom is the posthuman thinker instrumental in promoting the notion of an AI jobs apocalypse, though he is best known for the idea that reality is a computer game simulation. At a couple of points over the next 45 minutes I did wonder if maybe he had a point.

To my surprise, though, Bostrom was the most grounded and realistic about the prospects of AI. No, he said, general intelligence is nowhere near close and not even worth talking about. Be cautious about betting on big leaps in automation. By contrast, Wooldridge and Hall bubbled with enthusiasm like teenagers on nitrous oxide. AI is amazing! More money, please!

When academics ask for more resources, it's coded. "We need more capacity," said Wooldridge.

I have some sympathy for an AI veteran like Professor Wooldridge – the AI winters have been so very long and cold, and the summers so very short, that you can't blame him for running around like a squirrel on an autumn day, collecting as many nuts as he can. I learned he's writing two children's books on machine learning. That's two nuts at least.

However, if Wooldridge harboured any doubts about the epoch-changing potential of machine learning (ML) – doubts which have come to the surface this year – he kept them to himself. My job, I thought at the start, was tricky: ML has produced some useful new tools and applications, but a whole lot of unrealistic expectations, too. The conversations about AI that take place in the posh papers and at think tanks are quite divorced from reality. They simply assume that a wowzer demo leads to amazing robots.

Down in the trenches, though, both robotics people and machine-learning practitioners are fairly level-headed. The higher up you go, the grander and less supportable the claims become. Lord Giddens mentioned Wittgenstein and the past failings of symbolic AI: the committee was not going to be a pushover.

Hall and Bostrom had just made a hard job a bit harder.

Moravec's paradox is still a hurdle

Rory Cellan-Jones said his inbox was overflowing with AI material, and read out some samples. We must be at the top of the hype cycle, he said. O'Connor agreed.

So how to start?

The most useful thing I've taken away is that Moravec's paradox hasn't gone away.

A typical account of AI in the mainstream media goes like this. Here's an impressive image recognition demonstration. Something something self-driving car. Ergo, apocalypse! Fourth industrial revolution! The idea that the same trick deployed in the deep-learning demonstration can lead to a quantum leap (rather than incremental change) in robotics is a given.

The way the robotics and machine-learning communities get along – or quite evidently don't – suggests otherwise. A "breakthrough" in machine learning or deep learning isn't necessarily going to translate into smarter robots. It might, and it might not. All that laborious labelling and training is of no use at all in preventing a car from hitting an old lady crossing the road, for example.

So outside of games, or nice clean-room environments, it's hard to find evidence that justifies fears of an imminent, society-changing leap in the capabilities of machines. The world is messy and generates messy data. How could I explain all this?

So I did my best to explain the paradox. Doing well at adult-level games doesn't count, I said. Progress in robotics may be equally illusory: I'm not aware of a "self-driving truck" that can park yet. Since "autonomous" vehicles aren't really autonomous – they can tootle along happily for many miles on a perfect day on a clearly marked road – we should more correctly call them "semi-autonomous". Mind you, by that measure lane assist makes today's cars "semi-autonomous" too.

At the mention of games, I sensed a little unease. The committee had already met Google's Demis Hassabis and seen the movie AlphaGo. Hassabis is on an upward trajectory that has already taken him soaring past "National Treasure" status – he's rapidly approaching secular sainthood. Even when Hassabis says something ludicrous – that his Atari-playing AI shows general intelligence, for example – nobody picks him up on it. Nobody seems to have noticed that DeepMind's medical application, Streams, has absolutely nothing to do with AI – but it did succeed in putting confidential personal health data into Google's hands.

Perhaps a full-frontal assault was not wise.

The Right Reverend Robert Runcie

About two-thirds of the way in, the committee asked what risks are associated with AI.

This was a goldfish moment – I'd answered that one a few moments earlier, by not answering a previous question.

I then realised I was in The Two Ronnies' greatest sketch: the Mastermind round in which the contestant's specialist subject is answering the question before last.

Smart software but slow hardware?

For a moment I thought Cellan-Jones and I would reopen our famous row about teaching kids to code. The committee wondered whether our children were aware of, or prepared for, the coming AI revolution. I said I was disappointed that so much time is now devoted to algorithms (it's a weekly fixture at my kids' primary school) that history and art have been cut back, making only spasmodic appearances in the timetable. How can we make sense of the world – of something like China's relationship to Korea – without the culture and the history? This went down well.

But we both veered away from the topic – just in time. Then Rory said something interesting. He thought the problem today was that software was clever and hardware was slow.

I had such difficulty getting my head around this that I quite forgot where I was. The problem with AI today is of course quite the reverse: the hardware is prodigiously fast, but the software isn't very clever at all.

As Noam Chomsky points out, we've stopped doing "avant-garde" basic research and now rely on the blunt force of computing power to drag us through. I'll drop Chomsky's description – he uses the example of weather prediction – right here:

Suppose that somebody says he wants to eliminate the Physics Department and this is the right way to do something. The right way is to take endless numbers of video tapes of what's happening outside the window, and feed them into the biggest and fastest computer you have, gigabytes of data, and do complex statistical analysis, Bayesian this and that.

You'll get some kind of prediction about what will happen next. In fact you get a much better prediction than what the Physics Department will give you. If success is defined as getting a fair approximation from a massive, chaotic system and unanalysed data then it's way better to do it this way than to do it the way physicists do.

[But] you won't get the kind of understanding science has aimed at. What you'll get is an approximation to what's happening. That's not what meteorologists do. They want to understand how that's working. They're two different measures of what success is.

When the Lords have been told by people with great authority that everything is on the right track, how can you introduce this idea? I'm not sure I succeeded.

Into Trotsky's dustbin of history you go

At several points witnesses had raised the importance of data – individuals owning their own data was something Dame Wendy supported. So did I. The individual should be sovereign.

Matt Ridley, the 5th Viscount Ridley, asked me if this wasn't like a householder objecting to these newfangled aeroplanes flying over their house and demanding property rights over their personal airspace. It's an interesting analogy.

I hope not, I said, because if you follow that logic to its conclusion – that data ownership must be surrendered for the common good – it leads to a kind of totalitarianism. Pop open a pint after work and some algorithm somewhere will flag you as a higher insurance risk (you naughty drinker). This is nicely satirised in Dave Eggers' The Circle: "Sharing is Caring". If you turn off Big Brother, you are stigmatised.

Marking your own homework

The Lords then asked how well the media was covering developments in artificial intelligence. Both the BBC and the FT thought it was doing pretty well, on the whole.

You know, there are moments in life when you want to stick a fork through your head, then you realise you are on live TV. This was one of them. There wasn't a fork handy.

The spectacle of journalists marking their own homework and concluding they're doing just great is not one that travels well, I think. Of course I couldn't say this. Instead I suggested that if reality wasn't really being reflected in the media then we had a problem.

The scepticism found among AI practitioners is almost entirely absent from the coverage, as producers and editors debate the robot apocalypse. Now that Geoffrey Hinton – the father of deep learning – has recently recanted, causing astonishment in the ML community, maybe we could talk about different approaches? There is a genuine fear that the one-trick pony can't be asked to perform its trick again. Hinton says it won't.

This is a genuine concern. In a couple of years, when the dogs have barked and the caravan has moved on, we won't be any closer to building machines that make sense of the messy world.

What next for AI?

With this in mind, when asked for one recommendation, I suggested "Red Teams": people whose job is to find the flaws in an approach. Support researchers with interesting new ideas that aren't popular or fashionable. Encourage intellectual diversity.

AI sure needs it: the state of research is fairly pitiful, yielding only minor quantitative improvements. Now that Hinton himself has legitimised ripping up the metaphorical textbook – one he wrote – perhaps others will, too. ®

Bootnotes

Bostrom commissioned the famous (or notorious) 2013 study on employment by Osborne and Frey. Osborne and Frey listed job categories susceptible to "computerisation", which the media then treated as a prediction – kicking off the whole "Robots Will Take Your Middle-Class Job" panic.

I described this as a form of "cheque kiting". To support their case that computers are better at cognitive tasks, Osborne and Frey make ten references to Brynjolfsson and McAfee's Race Against the Machine (2011). In turn, Brynjolfsson and McAfee now treat Osborne and Frey's numbers as a hard prediction. It would have been better to compare this to "carousel fraud" – or better still, carousel propaganda.

To their credit, Cellan-Jones mentioned that the OECD had questioned Osborne and Frey's work, and Giddens pointed out that it was merely descriptive, not predictive. Whatever you call it, there are insufficient funds in the account for the cheque to clear.
