Google's AI search bot Bard makes $120b error on day one
'This highlights the importance of a rigorous testing process' says Choc Fact. No sh%t...
About 10 percent of Alphabet's market value – some $120 billion – was wiped out this week after Google proudly presented Bard, its answer to Microsoft's next-gen AI offerings, and the system bungled a simple question.
In a promotional video showing off Bard, a web search assistant built to compete with Microsoft's ChatGPT-enhanced Bing, the software answered a science question incorrectly, sending Alphabet's share price down amid an overall lackluster launch by the Chocolate Factory.
Microsoft's integration of OpenAI's super-hyped language models into the Bing search engine and Edge web browser has ignited an arms race. Microsoft wants to eat into Google's web search monopoly by offering a better search engine, one that uses OpenAI's ChatGPT to answer queries conversationally, in natural language, rather than with simple lists of links to relevant webpages.
The idea being that the bot is trained on fresh snapshots of the web, and netizens' web search requests are answered automatically by the bot with summaries of info scraped from the internet.
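To make that idea concrete, here is a rough Python sketch of the search-plus-summarize pattern: fetch some web results, then ask a language model to write an answer grounded in those snippets. The search_web() and ask_llm() helpers below are hypothetical stand-ins for whatever search index and model a vendor actually wires up; this illustrates the general approach, not Microsoft's or Google's actual pipeline.

```python
# Minimal sketch of search-grounded answering: fetch results for a query,
# then have a language model summarize them. search_web() and ask_llm()
# are hypothetical placeholders, not any vendor's real API.

def search_web(query: str) -> list:
    """Stand-in for a web search API call; returns url/snippet pairs."""
    # A real system would query an index refreshed with recent web crawls.
    return [
        {"url": "https://example.com/jwst", "snippet": "JWST is an infrared space telescope..."},
    ]

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model completion endpoint."""
    # A real system would send the prompt to a hosted model and return its text.
    return "(model-generated summary would appear here)"

def answer(query: str) -> str:
    # Gather a handful of results and stitch their snippets into a context block.
    results = search_web(query)[:5]
    context = "\n".join(f"- {r['url']}: {r['snippet']}" for r in results)
    # Ask the model to answer using only the scraped material, citing sources,
    # which is how these assistants try, and sometimes fail, to stay factual.
    prompt = (
        "Answer the question using only the sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return ask_llm(prompt)

if __name__ == "__main__":
    print(answer("What discoveries has the James Webb Space Telescope made?"))
```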
That's a bit of a Bard start, isn't it?
The Chocolate Factory is not about to give up any of its territory without a fight, though it stumbled at the first hurdle with this week's launch of ChatGPT rival Bard.
In an example query-response offered by Google's spinners, Bard was asked to explain discoveries made by NASA's James Webb Space Telescope (JWST) at a level a nine-year-old would understand. Some of the text generated by the model, however, was wrong.
Bard claimed "JWST took the very first pictures of a planet outside of our own solar system," yet the first image of just such an exoplanet, 2M1207b, was actually captured by the European Southern Observatory's Very Large Telescope in 2004, according to NASA.
Although language models can generate text that is coherent and grammatically correct, they also tend to confidently spew false information. The above error somehow made it past Google's various engineering, legal, PR, and marketing departments, and found its way into a demo of Bard, right when issues of accuracy and trust are at the top of everyone's minds.
"This highlights the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester program," a spokesperson from Google told The Register in a statement.
"We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety and groundedness in real-world information."
If Microsoft and Google want people to adopt AI chatbots as the new user interface for web search, they had better make sure the technology generates factual, up-to-date information.
Google announced Bard at the turn of the week, and on Wednesday held an event geared towards new AI-powered search abilities in its apps. The presentation, led by Prabhakar Raghavan, Google SVP for Search and Assistant, however, revealed little more about Bard, and instead focused on other features, such as machine translation abilities for its visual search app Lens and 3D mapping for Google Maps.
Meanwhile, Microsoft on Tuesday teased a preview version of its OpenAI-boosted Bing that people can eventually use, fingers crossed, and announced features coming to its Chromium-based Edge browser. Google plans to integrate Bard into its own search engine, though it's not yet clear when that will be generally available. ®