
Conversational AI tells us what we want to hear – a fib that the Web is reliable and friendly

As Google and Microsoft make us all AI consumers, remember that to err is human, but to really mess things up at scale takes a computer

Opinion There is an old saying: when giants fight, it is the grass that suffers. For us little people watching, there is little to do but run for cover and grab the popcorn.

For two decades, Microsoft and Google have regarded one another as fundamentally illegitimate. Microsoft has never really recovered from losing the battle for search, nor from the failure of Windows Mobile. Google had aspirations to own a universal operating system, but has been curiously unable to leverage the global dominance of Android beyond mobile. Their battle continues to rage on multiple fronts: Bing vs Google Search, Azure vs Google Cloud, and on it goes.

And in an unedifying spectacle of we-got-here-before-you-no-you-didn't, the two giants of computing have lately been trying to undermine each other's efforts to launch 'conversational' search products, based on large language models (LLMs). Google has spent years refining LaMDA – last year it even dismissed an employee who had convinced himself LaMDA possessed sentience – while Microsoft has been feeding and watering OpenAI and its multi-generational Generative Pre-trained Transformer (GPT).

Google likely has more AI PhDs working for it than all other businesses combined. During the middle years of the last decade it effectively denuded postgraduate programs in AI throughout the world by hiring entire class cohorts, tasking them with improving the quality of the firm's search results.

With that kind of brainpower, Google should be the undisputed leader in public-facing AI applications. But of course Google does only a couple of things really well: search, and ad targeting. Both of those need lots of smarts, but they're well-hidden from the eyes of actual people. As far as Joe Citizen can perceive, those enormous efforts in AI have been entirely frittered away.

That became obvious about a half-hour after OpenAI released ChatGPT – its conversational, contextually aware LLM. Almost instinctively, anyone interacting with ChatGPT asked themselves: "Why can't I use this for search?" Its interface seems natural, discursive, friendly, and thoroughly human – precisely the opposite of an ugly page of search results liberally salted with ads and trackers and all that other crap Google finds necessary to insert in order to keep its margins high.

Microsoft immediately saw ChatGPT as the weapon it needed to destabilize its competitor. Redmond quickly inked a multi-billion-dollar investment deal with OpenAI, and guaranteed that ChatGPT would be integrated across the entire suite of Microsoft products. That means not just Bing, but Office, GitHub, and – very likely – Windows.

Around this time, Google went "code red" – whatever that means. It brought Larry and Sergey back on deck, and did whatever it could, as quickly as it could, to integrate its existing wealth of LLMs into the flagship search product.

But it's looking late in the day for regrets.

Last week, Google announced a special event on February 8 to reveal its work on AI. Not long after that, a screenshot of ChatGPT integrated into Bing leaked online. Then on Monday February 6, Alphabet CEO Sundar Pichai announced Bard – Google's first-generation attempt to integrate LaMDA into its search engine. Microsoft quickly put together a competing event (on February 7, natch) where it revealed its progress integrating ChatGPT into Bing – confirming that leaked screenshot as real.

This may not be the best way to approach a technology as powerful – and as fraught – as conversational, contextually aware LLM AIs. One giant stomps, the other stomps back – and it's the grass that suffers.

Over the last few months perhaps a hundred million people have had a play with ChatGPT, marveling at its power … and its shortcomings. This "stochastic parrot" (a damning but accurate technical assessment) doesn't understand anything; it merely spits out whatever it rates as most likely to follow on from what has already been said. It's helpful and it's interesting – but it's not understanding. And that lack of depth means it has absolutely no common sense.
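The parroting trick can be sketched in a few lines of toy Python – a hypothetical, hugely simplified word-pair model, not anything resembling OpenAI's or Google's actual code. Real LLMs use vast neural networks over tokens rather than word-count tables, but the core move is the same: emit whatever is statistically likely to come next, with no model of truth anywhere in the loop.

    import random
    from collections import defaultdict

    # A toy "stochastic parrot" (illustrative only): it has merely
    # counted which word follows which in its training text, and
    # babbles by sampling a likely next word at each step.
    training_text = "the web is reliable and the web is friendly and the web is vast"

    follows = defaultdict(list)  # word -> observed next words
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

    def babble(seed, length=8):
        """Emit statistically likely continuations -- no understanding involved."""
        out = [seed]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))  # sample, don't reason
        return " ".join(out)

    print(babble("the"))  # e.g. "the web is friendly and the web is vast"

Scale that trick up by a few hundred billion parameters and you get something that sounds like it knows things.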

There's another old saying: "To err is human, but to really mess things up at scale takes a computer."

Both Google and Microsoft have promised that their LLM-flavored search tools will make it clear to users that these results are not to be relied upon. But these same two firms have spent decades, and countless billions of dollars, telling us that computers are our trustworthy companions – they never forget, never make a mistake, and provide access to the wealth of human knowledge.

After all of that indoctrination, we have little choice but to trust whatever ChatGPT or LaMDA say to us. To do otherwise means ignoring everything we've heard across two generations about what computers promise.

Microsoft and Google have both upped their game, using new weapons that no one fully understands – not even their makers. Is that wise? Is it even safe? ChatGPT would probably say yes, but it has a vested interest.

Conversational AIs excel at telling us exactly what we want to hear. Google and Microsoft have decided that for their survival into the next generation of computing we users must be surrounded by synthetic con artists, continually confusing fact and fiction so subtly and so thoroughly that truth becomes lost in noise and nearly unknowable. Pass the popcorn. ®
