AGI is on clients' radar but far from reality, says Gartner
Controversial concept may not even be a useful goal in computing
Gartner is warning that any prospect of Artificial General Intelligence (AGI) is at least 10 years away, and may never arrive at all. It might not even be a worthwhile pursuit, the analyst firm says.
AGI has become a controversial topic in the last couple of years as builders of large language models (LLMs), such as OpenAI, make bold claims that they've established a near-term path toward human-like intelligence. At the same time, others from the discipline of cognitive science have scorned the idea, arguing that the concept of AGI is poorly understood and the LLM approach is insufficient.
In its Hype Cycle for Emerging Technologies, 2024, Gartner says it distills "key insights" from more than 2,000 technologies, using its framework to produce a succinct set of "must-know" emerging technologies with the potential to deliver benefits over the next two to ten years.
The consultancy notes that GenAI – the subject of volumes of industry hype and billions in investment – is about to enter the dreaded "trough of disillusionment." Arun Chandrasekaran, Gartner distinguished VP analyst, told The Register:
"The expectations and hype around GenAI are enormously high. So it's not that the technology, per se, is bad, but it's unable to keep up with the high expectations that I think enterprises have because of the enormous hype that's been created in the market in the last 12 to 18 months."
However, GenAI is likely to have a significant impact on investment in the longer term, Chandrasekaran said. "I truly still believe that the long-term impact of GenAI is going to be quite significant, but we may have overestimated, in some sense, what it can do in the near term."
As for the short term? There will inevitably be some twists, turns and bumps on the way. AI expert Gary Marcus wrote an article at the start of this month claiming the "collapse of the Generative AI bubble may be imminent" in the "financial sense".
"To be sure, Generative AI itself won’t disappear. But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may sold, or stripped for parts."
This is based on what he sees as "no robust solution to hallucinations"; "modest lasting corporate adoption"; and "modest profits".
Previous Gartner research indicates the firm doesn't expect a payback from office AI, in terms of mainstream adoption, for at least two years. As of March, Microsoft was still trying to convince customers of its productivity benefits.
- Gartner mages: Payback from office AI expected in around two years
- How deliciously binary: AI has yet to pay off – or is transforming business
- Gartner nudges down global IT spending growth forecast as 'change fatigue' persists
- 64% of people not happy about idea of AI-generated customer service
Also included in Gartner's Emerging Technology Hype Cycle is AGI, which the consultancy says is climbing toward the "peak of inflated expectations" and may be more than ten years away from delivering an impact.
Chandrasekaran told us it was not the first time AGI had appeared on the hype cycle. "Users were asking about it, so we needed to have a point of view. We're not going to get to AGI anytime soon. That's not what we're seeing here at all. All we're seeing essentially is that AGI is a goal for many of these AI research labs, but it's going to take an enormous amount of effort."
It remains unclear whether the LLM research labs are taking the right approach. "There's a belief that if the models get bigger and bigger at some point, we've got to get to AGI, and I don't think that is likely to be the case," Chandrasekaran said. "We have to think about how we induce some of these concepts, like reasoning, for example, into the models. We also have to make the models learn about the world the way human beings learn about the world, which is through our senses."
He argued there is no clear consensus within the research community on whether AGI is a goal worth pursuing. "Even the timeline for reaching it or even what AGI means is uncertain. I believe that machines are good at certain things and human beings are good at certain things, and I don't know whether trying to create a machine that thinks and acts like a human being may be the most desirable or the most optimal goal." ®