Voice assistants failed because they serve their makers more than they help users

Siri? Google? Alexa? Could ChatGPT save us from this data harvesting nightmare?

Opinion We were promised an age of wonders. By 2022 we'd have self-driving cars, robo-maids, even voice-activated "friends" – digital companions to keep us well-informed. What went wrong?

We know that the physical world, filled with exceptions and one-off events, continuously confounds even the most able humans. Expecting more out of a piece of software seems – from 2022's jaded point of view – a tragic example of believing your own hype. Robo-maids and self-driving vehicles fall over when they encounter the real world. Anything else would be either blind luck or black magic.

We had every reason to expect better outcomes in the purely symbolic worlds of information, knowledge and communication. The big three personal voice assistants – Amazon Alexa, Google Assistant and Siri – have been with us for the better part of a decade, trained by billions (possibly trillions) of tiny interactions, each of them improving language models that must now be the equal of anything developed anywhere.

Despite this, voice assistants have proven almost completely pointless. Amazon recently admitted to losing billions of dollars a year on Alexa hardware that promises much and delivers disappointingly little. Ditto Google. Apple has risked less on Siri, but even its ambitions for a HomePod speaker – with embedded Siri – fell far short of its imagined potential.

Ask anyone what they use these assistants for (everyone with a smartphone has access to at least one of them) and the answer almost invariably boils down to one of two tasks: playing music, or setting a timer.

While useful, these activities are such trivial uses of voice assistants that you have to wonder why anyone ever bothered – and, in the face of such overwhelming indifference, why vendors continue to throw billions at their development.

Why haven't all voice assistants joined the vast graveyard of "tried it, didn't quite work out" technologies, like pen computing, the semantic web, or blockchain?

The answer is simple: voice assistants were never designed to serve user needs. The users of voice assistants aren't the customers – they're the product.

Back at the 2019 Consumer Electronics Show, I took a brief tour through a pavilion featuring Amazon's "Alexa Home" – brief, because with all those devices listening, waiting for a word on which to pounce, the pavilion felt more like a holding cell at the Ministry of Love than the home of the future. All of those microphones were waiting for the opportunity to harvest data – on activity or need or desire – so that this information could be sold on, analyzed, and translated into some form of near-subliminal nudges, then fed back to the user. The whole system had been carefully thought out as a closed feedback loop, driving users into patterns of consumption designed to benefit Amazon.

Even though most users had no conscious awareness of this architecture, they somehow instinctively rejected it. No one uses voice assistants as their makers intended – everyone engages with them at the bare minimum that delivers some automation, without surrendering agency. How we did this – collectively and unconsciously – must puzzle the makers of these gadgets, who obviously believed we'd be wide open to the wonder of AI.

They believed this because those assistants had been designed as mimics of the Knowledge Navigator – the Ur-demo of personal digital assistants, dreamed up by Apple's research team thirty years before it became possible. Using a conversational interface, the Knowledge Navigator helped its users find what they needed to find, learn what they needed to learn, and do what they needed to do. It gave them added agency in every situation.

That dream did not survive the rise of the internet as an advertising-revenue-driven medium. Agency lost out to the agencies, learning gave way to profiling, and finding became restricted to commercially advantageous search results. Cut to size on the Procrustean Bed of Late Capitalism, the heirs to the Knowledge Navigator promised much – but could only ever serve their masters, not their users.

Playing with the exciting and terrifying ChatGPT, I've recovered my sense of what a Knowledge Navigator should be like. It's friendly, knowledgeable, and works toward my own ends. Yet someone has already pointed out that, because ChatGPT has been released for free, it's free to harvest all of our interactions: learning, refining its models, and – perhaps – creating the successor generation to these failed voice assistants.

Something smart enough to be engaging and friendly and helpful, while at the same time profiling, analyzing and nudging – that may be our future. Will we instinctively resist these new offerings as they serve us up to their masters? ®
