Satya Nadella wants to make Google dance in battle for AI chat-powered web search
Plus: Publisher using AI tools generated false health advice for men; how ChatGPT widens economic inequalities
In brief Microsoft CEO Satya Nadella has been waiting for the chance to challenge Google's dominance of internet search, and just might have finally pulled it off this week with the launch of AI-powered Bing.
Both companies believe language model chatbots will be the new interface of search. Instead of sifting through information across multiple websites to find what you're looking for, AI will summarize text and generate relevant information for you in a conversational manner.
Microsoft has integrated OpenAI's latest tools – reportedly more powerful than ChatGPT – into its Bing search engine, and the technology is coming soon to its Edge browser too. Meanwhile Google has promised to deploy Bard – a chatbot built on its LaMDA language model – in Google Search.
Nadella knows Microsoft is starting from behind in this race. "They're the 800-pound gorilla in this … And I hope that, with our innovation, they will definitely want to come out and show that they can dance. And I want people to know that we made them dance, and I think that'll be a great day," he said in an interview with The Verge.
Google hasn't made a good start in its efforts to convince world+dog it's a chatbot player, having released a demo of Bard that included a factual error. But if any company has the resources and experience to nail internet search, it's big ol' Google.
AI gave bad health and medical advice in online publication
Another week, another publisher outed for using AI to generate articles riddled with factual errors.
This time it's Arena Group, owner of sports, entertainment, and health-related outlets including Sports Illustrated and Men's Journal.
An article discussing the reasons for low testosterone in men, written with the help of AI, was found to contain several inaccuracies. According to a report in Futurism, it linked low testosterone levels to various factors including psychological symptoms and poor diet that aren't backed up by solid scientific evidence.
"The original version of this story described testosterone replacement therapy as using 'synthetic hormones' and stated poor nutrition as one of the most common causes of low T, which are inaccurate," the article stated after changes were made.
Bots like ChatGPT may write text that seems convincing, but they often struggle to present accurate facts. Still, that hasn't stopped media businesses like CNET and Arena Group from using them. They believe these tools enable editorial teams to crank out more clickbait quickly, but to date quality appears to have been sacrificed for speed.
If editors spend too much time fact checking or rewriting the text, what's the point of using these tools in the first place?
Getty claims Stability AI stole 12 million images
Stock image biz Getty Images has filed a second lawsuit against Stability AI, the UK-based startup best known for its text-to-image Stable Diffusion model, for copyright infringement.
The latest lawsuit [PDF], filed in the US this time, claims Stability has committed "brazen infringement of Getty Images' intellectual property on a staggering scale" by illegally copying more than 12 million photographs "to build a competing business." Getty also accused Stability of trying to scrub the company's copyright management information, and that images generated by Stable Diffusion contain its watermarks – which would prove their origin.
Getty initially banned AI-generated artwork on its image platform last September over fears it could be held legally responsible for hosting content that infringes copyright. Since then it has partnered with BRIA, a generative AI startup, to explore how the technology could be used on its site.
Now it believes that its competitor, Stability, has unfairly scraped its images without explicit permission – and it wants to be compensated.
"Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights," it previously said in a statement. "Stability AI did not seek any such license from Getty Images and instead, we believe, chose to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests."
FDA Orphan Drug Designation approval for AI-designed drug
The US Food and Drug Administration has granted Orphan Drug Designation (ODD) to Insilico Medicine for a molecule, designed by the company's AI platform, that tackles idiopathic pulmonary fibrosis – a rare type of chronic lung disease.
Under ODD status, pharma companies are eligible for federal grants and tax credits to pursue clinical trials, receive a seven-year marketing exclusivity period upon FDA approval, and are exempt from prescription drug user fees. The designation is separate from the FDA's normal approval process for new drugs, and incentivizes companies to develop treatments for rare diseases affecting fewer than 200,000 people, even though such treatments are less lucrative.
Insilico started early clinical trials for its molecule, INS018_055, to treat IPF in New Zealand and China last year. The preliminary results from those trials led the FDA to grant the AI-designed molecule ODD status, paving the way for the startup to develop the drug for real patients.
"We are pleased to announce that Insilico has achieved numerous drug discovery milestones and provided new clinical hope using generative AI," Alex Zhavoronkov, CEO of Insilico, said in a statement. "We are progressing the global clinical development of the program at top speed to allow patients with fibrotic diseases to benefit from this novel therapeutic as soon as possible."
ChatGPT will take jobs and widen economic inequalities
Experts in economics and AI believe tools like ChatGPT will take millions of jobs and worsen the wealth disparity between the rich and poor.
Lawrence Katz, a labour economist at Harvard, told The Guardian that technology has always led to jobs changing. "I have no reason to think that AI and robots won't continue changing the mix of jobs. The question is: will the change in the mix of jobs exacerbate existing inequalities? Will AI raise productivity so much that even as it displaces a lot of jobs, it creates new ones and raises living standards?"
ChatGPT can generate a wide range of text to perform different tasks, such as answering questions, writing essays or code, or summarizing documents. Its capabilities are already impacting industries from customer service to marketing and advertising, and are poised to affect journalism, law, and engineering too.
Meanwhile, William Spriggs, an economics professor at Howard University and chief economist at the trade union AFL-CIO, had a pessimistic answer to Katz's question.
"If you make workers more productive, workers are then supposed to make more money. Companies don't want to have a discussion about sharing the benefits of these technologies. They'd rather have a discussion to scare the bejesus out of you about these new technologies. They want you to concede that you're just grateful to have a job and that you'll [get paid] peanuts." ®