Google says it did not train its AI chatbot Bard on your private emails
ALSO: Web traffic to Microsoft Bing up 15.8 per cent since launch of GPT-4 bot, and more
AI In Brief Google did not train its web-search chatbot Bard on text from private Gmail accounts, a spokesperson confirmed to The Register.
An AI researcher quizzed Bard on where its training data came from, and was surprised when it mentioned internal data from Gmail. Former Google employee Blake Lemoine – who was fired after leaking company secrets and claiming its large language model (LLM) LaMDA was sentient – insisted that Bard was, indeed, trained on text from Gmail.
The Register asked Google for comment, and a representative told us in a statement: "Like all LLMs, Bard can sometimes generate responses that contain inaccurate or misleading information while presenting it confidently and convincingly. This is an example of that. We do not use personal data from your Gmail or other private apps and services to improve Bard."
Google launched Bard this week, and invited netizens in the US and UK to join the waitlist to talk to the chatbot. So far, Bard doesn't seem to generate text as erratic and unhinged as Microsoft's Bing did in early tests – but it can still be prompted into replying to inappropriate requests, and is prone to making up false information.
FYI … Databricks has produced a large language model called Dolly, built by fine-tuning the older open source GPT-J model on instruction-following data in the style of the more recent Alpaca.
"We show that anyone can take a dated off-the-shelf open source large language model (LLM) and give it magical ChatGPT-like instruction-following ability by training it in 30 minutes on one machine, using high-quality training data," the Databricks team claimed.
Web traffic to Microsoft's Bing site up since chatbot launch
Netizens have taken to Microsoft's Bing more since the company launched its new GPT-4-powered internet search chatbot, increasing its web page visits by 15.8 per cent.
Microsoft released the new Bing to people who signed up to a waitlist on February 7, and has since onboarded millions of users. Meanwhile, Google lagged behind, only launching Bard this Tuesday.
The head start has given Microsoft a boost in web traffic, and the challenge now is to maintain that growth and win over Google users. Gil Luria, an analyst at D.A. Davidson & Co, an investment banking company, told Reuters: "Bing has less than a tenth of Google's market share, so even if it converts one [per cent] or two [per cent] of users it will be materially beneficial to Bing and Microsoft".
Increased page views will improve Microsoft's online search and advertising business, an area that is dominated by Google. Data analytics firm Similarweb found that from February 7 to March 20, page visits to Google dropped by one per cent. It's difficult to know whether that tiny dip had anything to do with Bing, and it'll be interesting to see if this changes in the future now that Bard is available.
- OpenAI rolls out ChatGPT plugins, granting iffy language model access to your apps
- Forget general AI, apparently zebrafish larvae can count
- ChatGPT, how did you get here? It was a long journey through open source AI
- French parliament says oui to AI surveillance for 2024 Paris Olympics
AI won't replace bankers yet
ChatGPT cannot pass the exams required to become a chartered financial analyst, which test candidates' knowledge of a range of complex topics spanning statistics, economics, and management.
Sample multiple-choice questions were fed into the model, and it was asked to respond with the right answer plus a summary explanation. ChatGPT only managed to get 8 out of 24 problems right, failing to pass by official CFA standards. "ChatGPT was able to accurately describe spread duration in relation to callable and non-callable bonds. But it picked the wrong portfolio to suit a bull market and used garbage maths to overestimate by threefold an expected six-month excess return," according to experiments run by The Financial Times.
In one example, the bot said there was insufficient information to answer and refused to try. Large language models are good at generating convincing-looking text that is coherent and grammatically correct. But they are incapable of reasoning and cannot tell fact from fiction – making them unsuitable for specific tasks like providing financial advice.
OpenAI's latest GPT-4 model is more powerful, and would probably fare better. It was reportedly able to pass the bar exam, for example, and appeared to perform well on other tests like AP Maths and Environmental Science. Interestingly, it flunked AP English and AP English Literature. A pair of researchers from Princeton University warned that testing language models on exam questions may not be the best way to benchmark their performance. They may have already memorized the answers to questions if they are in their training data.
GPT-4, for example, is much better at answering programming questions from the competitive coding site Codeforces if they date from before its training cutoff, but is terrible at solving more recent problems. GPT-4 was trained on text scraped from the internet up until September 2021.
"Benchmarks are already wildly overused in AI for comparing different models. They have been heavily criticized for collapsing a multidimensional evaluation into a single number. When used as a way to compare humans and bots, what results is misinformation," the researchers said.
Will Hollywood allow AI-written scripts?
The Writers Guild of America – a labor union representing writers for films, TV shows, and the like – supports the use of AI tools to create scripts.
Writers should be allowed to use generative text software to help produce scripts, with humans taking full credit. The plot or dialogue for a screenplay could be generated by AI but edited by writers, for example. Or writers could use these tools for inspiration in their own work.
Under a proposal from the WGA, AI-generated text would not be classified as "literary material" or "source material", Variety, an entertainment magazine, reported. That way, TV and film studios won't need to worry about accreditation issues and potentially having to compensate the companies that built the software used to write a manuscript.
Professional writers would get full credit for their work, and receive compensation for it. The proposal is currently being debated between the WGA and the Alliance of Motion Picture and Television Producers, a trade group representing hundreds of TV and film studios in America. ®