Facebook scales back AI flagship after chatbots hit 70% f-AI-lure rate

'The limitations of automation'


So it begins.

Facebook has scaled back its ambitions and refocused its application of "artificial intelligence" after its AI bots hit a 70 per cent failure rate. Facebook unveiled a bot API for its Messenger IM service at its developer conference last April. Facebook CEO Mark Zuckerberg had high hopes.

Tencent's WeChat was the model. Although WeChat began life as an instant-messaging client, it rapidly evolved into a major platform for e-commerce and transactions in China. But it largely keeps any AI guesswork away from real users.

With Facebook's bot API, Zuckerberg had joined a "chatbot arms race" with Microsoft CEO Satya Nadella. For Nadella, chatbots were "Conversations as a Platform," or even the "third run-time" – as important to humanity as the operating system or the web browser.

Some experts fretted that if China opened up a lead in AI, the West would be doomed to lose World War 3. Others suggested that whichever superpower lost the AI arms race would relapse into a state of primitive technology feudalism.

However, as we reminded you recently, the reality of "artificial intelligence" is far from impressive once it's made to perform outside carefully stage-managed and narrow demos. As stage one, Facebook's AI would parse the conversation and insert relevant external links into Messenger conversations. So how has the experiment fared?

In tests, Silicon Valley blog The Information reports, the technology "could fulfil only about 30 per cent of requests without human agents" – a 70 per cent failure rate. And that wasn't the only problem. "The bots built by outside developers had issues: the technology to understand human requests wasn't developed enough. Usage was disappointing," we're told. Now the system is simply trying to make sense of the conversation.

There's even a phrase you won't have seen in many mainstream thinkpieces about AI, predicting a near future of clever algorithms taking middle-class jobs. Brace yourselves, dear readers. Facebook engineers will now focus on "training [Messenger] based on a narrower set of cases so users aren't disappointed by the limitations of automation."

Ah.
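That "narrower set of cases" strategy has a familiar shape: only automate a small whitelist of intents the system can reliably recognise, and hand everything else to a human agent. A minimal sketch of the idea – the intent names and trigger keywords below are our own invention for illustration, not anything from Facebook's actual system:

```python
import re

# Hypothetical narrow whitelist: the only intents the bot is allowed
# to handle on its own.
INTENTS = {
    "order_status": {"order", "shipping", "delivery", "tracking"},
    "store_hours": {"hours", "open", "close", "opening"},
}

def handle_message(text: str) -> str:
    """Automate only whitelisted intents; defer everything else."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return f"bot:{intent}"   # inside the narrow set: automate
    return "human_handoff"           # outside it: pass to a human agent

print(handle_message("Where is my order?"))  # bot:order_status
print(handle_message("Tell me a joke"))      # human_handoff
```

The trade-off is exactly the one Facebook's engineers describe: the bot answers fewer kinds of question, but users are never shown a confidently wrong guess.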

"Their discussions are much more grounded in reality now compared to last year," said another person close to the Messenger developers. "The team in there now is finding ways to activate commercial intent inside Messenger. It's much less about, 'We'll dominate the world with AI.'"

Analyst Richard Windsor describes Facebook as "the laggard in AI," failing to match the results of Google. "The problems that it has had with fake news, idiotic bots and Facebook M, all support my view that when Facebook tries to automate its systems, things always go wrong. The problem is not that Facebook does not have the right people but simply that it has not been working on artificial intelligence for nearly long enough," he wrote recently.

In its exclusive, The Information also notes that Facebook has been grappling with what we call the "Clippy The Paperclip problem": the user views the contribution by the agent, or bot, as intrusive.

"Clippy didn't fail for lack of good intentions, or contextual unawareness, but because the interruption was inappropriate," we noted.

Facebook is expected to unveil its revised plans for its chat AI stuff at this year's F8 developer conference in April. ®


Biting the hand that feeds IT © 1998–2022