Google's AI chatbot Bard catches up, can now generate code
NSFW? You'll need to check outputs for accuracy. Plus: OpenAI CEO says massive model era's over, Microsoft said to be building custom AI chip
In brief Bard, Google's AI-powered internet search chatbot, can now generate and help debug code in over 20 different programming languages.
Users can instruct Bard to solve programming tasks and ask it to fix or explain snippets of code in C++, Go, Java, JavaScript, Python, and TypeScript, as well as generate functions to analyse data in Google Sheets. That's all well and good, except when the bot writes wrong or bad code.
"Bard is still an early experiment, and may sometimes provide inaccurate, misleading or false information while presenting it confidently. When it comes to coding, Bard may give you working code that doesn't produce the expected output, or provide you with code that is not optimal or incomplete," Paige Bailey, Group Product Manager at Google Research, warned in an official blog post.
As always with these newfangled AI chatbots, be careful to check their outputs for accuracy. That may be more difficult for beginner programmers, and it's not clear the tools are always worth using if understanding and correcting their code takes more time and effort than writing it from scratch.
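One cheap safeguard, offered here as a minimal sketch rather than anything Google recommends: pin a chatbot-suggested helper down with a few assertions before trusting it. The function below is a hypothetical example of the kind of spreadsheet-style data helper Bard might produce, not actual Bard output.

```python
# Hypothetical example of a Bard-style suggestion: average the numeric cells in a
# spreadsheet column, ignoring blanks and non-numeric entries.
def column_average(cells: list) -> float:
    values = []
    for c in cells:
        if isinstance(c, (int, float)):
            values.append(float(c))
        elif isinstance(c, str) and c.strip().replace(".", "", 1).isdigit():
            values.append(float(c))
    return sum(values) / len(values) if values else 0.0

# A handful of sanity checks: cheap to write, and they catch the obvious failure modes
# (blank cells, text cells, an empty column) before the code goes anywhere important.
assert column_average([1, 2, 3]) == 2.0
assert column_average(["4", "", "8"]) == 6.0
assert column_average([]) == 0.0
print("all checks passed")
```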
"Despite these challenges, we believe Bard's new capabilities can help you by offering new ways to write code, create test cases, or update APIs. If Bard quotes at length from an existing open source project, it will cite the source," Bailey added.
Era of giant AI models is over, says OpenAI CEO
Making neural networks bigger may not be the way to keep advancing AI capabilities, according to OpenAI CEO Sam Altman.
Building models with more parameters typically leads to better performance, as demonstrated by OpenAI's development of its GPT-based models. But scaling further may not pay off: training ever-larger systems becomes increasingly expensive, and the gains may not justify the cost.
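For a rough sense of why, here's a back-of-the-envelope sketch using the common rule of thumb that dense-transformer training compute is roughly 6 × parameters × training tokens in FLOPs; the accelerator throughput and utilization figures below are illustrative assumptions, not OpenAI numbers.

```python
# Back-of-the-envelope training-cost estimate. The 6 * params * tokens rule of thumb
# is approximate; the hardware figures are assumptions chosen purely for illustration.
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer, in FLOPs."""
    return 6 * params * tokens

def gpu_days(flops: float, peak_flops_per_sec: float = 150e12, utilization: float = 0.4) -> float:
    """Convert FLOPs to GPU-days for a hypothetical 150 TFLOP/s accelerator at 40% utilization."""
    seconds = flops / (peak_flops_per_sec * utilization)
    return seconds / 86_400

for params, tokens in [(7e9, 1e12), (70e9, 2e12), (700e9, 4e12)]:
    f = training_flops(params, tokens)
    print(f"{params / 1e9:5.0f}B params, {tokens / 1e12:.0f}T tokens -> "
          f"{f:.1e} FLOPs, ~{gpu_days(f):,.0f} GPU-days")
```

Each tenfold jump in parameters, with more training data to match, multiplies the compute bill by well over an order of magnitude, which is the economics Altman is pointing at.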
"I think we're at the end of the era where it's going to be these, like, giant, giant models," he said at an event held at MIT last week. "We'll make them better in other ways."
Altman believes developers will have to find new methods and techniques to improve neural networks without necessarily making them much larger than today's models. OpenAI, for example, has leaned into using reinforcement learning with human feedback for ChatGPT, which guides the model to generate text that is more appropriate and human-like.
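RLHF itself is a training-time procedure: human raters rank model outputs, a reward model is fitted to those rankings, and the language model is then fine-tuned (typically with a PPO-style update) to score highly on that reward. As a loose, self-contained illustration of the underlying idea, that a preference score decides which text wins out, here is a toy best-of-n reranking sketch with a stand-in reward function; it is a simplification for illustration, not OpenAI's pipeline.

```python
import random

# Stand-in for a learned reward model. In real RLHF this is a neural network trained on
# human preference rankings; here we just reward politeness and penalise rambling.
def toy_reward(response: str) -> float:
    score = 1.0 if ("please" in response.lower() or "thanks" in response.lower()) else 0.0
    return score - 0.01 * len(response)

# Stand-in for drawing n candidate responses from a language model.
def sample_candidates(prompt: str, n: int = 4) -> list:
    canned = [
        "No.",
        "Sure, here is a short answer. Thanks for asking!",
        "Sure, here is an extremely long answer that rambles on and on without ever stopping...",
        "Please see the short answer below.",
    ]
    return random.sample(canned, k=min(n, len(canned)))

def best_of_n(prompt: str, n: int = 4) -> str:
    """Return whichever candidate the reward function scores highest (reranking, not RL training)."""
    return max(sample_candidates(prompt, n), key=toy_reward)

print(best_of_n("How do I undo my last git commit?"))
```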
AI is still a relatively immature field; the industry will no doubt evolve as new breakthroughs in machine learning, neural network architectures, and hardware emerge.
Microsoft is developing its own custom AI chip
Microsoft is reportedly designing its own custom AI processors, codenamed Athena, as it continues providing the computational resources needed for OpenAI to develop and deploy its technologies.
Microsoft has invested $10 billion in OpenAI and signed a deal to exclusively license the startup's technology. OpenAI's GPT-4, for example, powers Microsoft's internet search chatbot, Bing.
The new chip can be used to both train and run AI models, and hardware engineers began working on its custom design in 2019, according to The Information.
By building its own silicon, Microsoft can optimize the design to support OpenAI's technology, its cloud customers, and its own AI-powered products too. The company could also stand to save money if it doesn't have to rely as much on third-party hardware vendors like Nvidia.
OpenAI employees have reportedly been helping Microsoft test Athena.
Generative AI and healthcare
Microsoft and Epic, maker of the electronic health record software used by thousands of hospitals in the US, are collaborating to apply generative AI to healthcare.
Developers will build new AI features and services on top of Epic's software, using Microsoft tools running on the Azure OpenAI Service. One project will use GPT-4 to help clinicians explore and extract relevant data through SlicerDicer, Epic's self-service reporting tool.
"Our exploration of OpenAI's GPT-4 has shown the potential to increase the power and accessibility of self-service reporting through SlicerDicer, making it easier for healthcare organizations to identify operational improvements, including ways to reduce costs and to find answers to questions locally and in a broader context," Seth Hain, senior vice president of research and development at Epic, said in a statement.
UC San Diego Health, UW Health in Madison, Wisconsin, and Stanford Health Care hospitals have reportedly already tapped into Microsoft's resources to deploy software that automatically drafts message responses. Epic and Microsoft believe generative AI can help healthcare organizations be more productive and efficient. ®