Think tank funded by Big Tech argues AI’s climate impact is nothing to worry about
Sure, datacenters consume lots of energy. But maybe they'll invent stuff that helps
It's well established that the tens of thousands of GPUs used to train large language models (LLMs) consume a prodigious amount of energy, leading to warnings about their potential impact on Earth’s climate.
However, according to the Information Technology and Innovation Foundation's Center for Data Innovation (CDI), a Washington DC-based think tank backed by tech giants like Intel, Microsoft, Google, Meta, and AMD, the infrastructure powering AI isn’t a major threat.
In a recent report [PDF], the Center posited that many of the concerns raised over AI's power consumption are overblown and draw from flawed interpretations of the data. The group also contends that AI will likely have a positive effect on Earth’s climate by replacing less efficient processes and optimizing others.
"Discussing the energy usage trends of AI systems can be misleading without considering the substitutional effects of the technology. Many digital technologies help decarbonize the economy by substituting moving bits for moving atoms," the group wrote.
The Center’s document points to a study [PDF] by Cornell University that found using AI to write a page of text produced between 130 and 1,500 times less CO2 than an American doing the same job on a standard laptop - although that figure also folds in the carbon cost of the person's living and commuting. A closer look at the figures, however, shows they omit the 552 metric tons of CO2 generated by training ChatGPT in the first place.
The argument can be made that the amount of power used to train an LLM is dwarfed by what's consumed deploying it — a process called inferencing — at scale. AWS estimates that inferencing accounts for 90 percent of the cost of a model, while Meta puts it closer to 65 percent. Models are also retrained from time to time.
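How that split plays out depends on the numbers you plug in. Here's a back-of-the-envelope sketch, in Python, that amortizes the 552 tonnes cited above over a year of hypothetical traffic; the query volume and per-query inference figure are illustrative assumptions, not values from either report.

```python
# Amortizing training emissions over inference at scale.
# The 552 t figure is from the study cited above; everything else is assumed.

TRAINING_CO2_KG = 552 * 1000        # one-off training run, in kg
QUERIES_PER_DAY = 10_000_000        # hypothetical daily query volume
DAYS_IN_SERVICE = 365               # hypothetical lifetime before retraining
CO2_PER_QUERY_KG = 0.004            # hypothetical per-query inference emissions

total_queries = QUERIES_PER_DAY * DAYS_IN_SERVICE
training_per_query_g = TRAINING_CO2_KG / total_queries * 1000
inference_total_t = CO2_PER_QUERY_KG * total_queries / 1000

print(f"Training CO2 amortized per query: {training_per_query_g:.2f} g")
print(f"Inference CO2 over the period: {inference_total_t:,.0f} t "
      f"vs {TRAINING_CO2_KG / 1000:.0f} t for training")
```

Under those assumptions the inference total comes out roughly 25 times the training figure, which is why the split AWS and Meta describe matters so much to the overall accounting.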
The CDI report also suggests that just as a smart thermostat can reduce a home's energy consumption and carbon footprint, AI could achieve similar efficiencies by preemptively forecasting grid demand. Other examples included using AI to work out how much water or fertilizer farmers should apply for optimal efficiency, or tracking methane emissions from satellite data.
Of course, for us to know whether AI is actually making the situation better, we need to measure it, and according to the CDI there's plenty of room for improvement in this regard.
Why so many estimates get it wrong
According to the Center for Data Innovation, this isn't the first time technology's energy consumption has been met with sensationalist headlines.
The group pointed to one claim from the peak of the dot-com era that estimated the digital economy would account for half the electric grid's resources within a decade. Decades later, the International Energy Agency (IEA) estimates that datacenters and networks account for just 1-1.5 percent of global energy use.
That’s a lovely number for the Center’s backers, whose various deeds have earned them years of antitrust action that imperils their social license.
But it’s also a number that’s hard to take at face value, because datacenters are complex systems. Measuring the carbon footprint or energy consumption of something like training or inferencing an AI model is therefore prone to error, the CDI study contends, without irony.
One example highlighted cites a paper by the University of Massachusetts Amherst that estimated the carbon footprint of Google's BERT natural language processing model. That figure was then used to estimate the emissions from training a neural architecture search model, arriving at 626,155 pounds of CO2.
The findings were widely reported in the press, yet a later study showed the actual emissions were 88 times smaller than initially thought.
Where estimates are accurate, the report contends, other factors, like the mix of renewable energy, the cooling tech, and even the accelerators themselves, mean the figures are only really representative of that workload at that place and time.
The logic goes something like this: If you train the same model two years later using newer accelerators, the CO2 emissions associated with that job might look completely different. Consequently, a larger model won't necessarily consume more power or produce more greenhouse gases as a byproduct.
There are a few reasons for this but one of them is that AI hardware is getting faster, and another is that the models that make headlines may not always be the most efficient, leaving room for optimization.
From the report's chart, we see that more modern accelerators, like Nvidia's A100 or Google's TPUv4, have a larger impact on emissions than parameter size.
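For context, such estimates are usually built from the same basic arithmetic: accelerator-hours, average power draw, datacenter overhead (PUE), and the carbon intensity of the local grid. The sketch below uses placeholder numbers only, to show how far the answer moves when the hardware and the grid change.

```python
# Rough training-emissions estimate: accelerator-hours x power x PUE x grid intensity.
# All inputs are placeholders for illustration, not figures from the report.

def training_co2_kg(num_accelerators: int, hours: float, avg_power_kw: float,
                    pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimated CO2 for one training run, in kilograms."""
    energy_kwh = num_accelerators * hours * avg_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

# The same hypothetical job, two years apart: faster chips finish sooner,
# the facility is more efficient, and the grid is cleaner.
older = training_co2_kg(1_000, 720, 0.40, 1.5, 0.45)
newer = training_co2_kg(1_000, 240, 0.45, 1.1, 0.10)
print(f"older setup: {older / 1000:.0f} t CO2, newer setup: {newer / 1000:.0f} t CO2")
```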
"Researchers continue to experiment with techniques such as pruning, quantization, and distillation to create more compact AI models that are faster and more energy efficient with minimal loss of accuracy," the author wrote.
The CDI report's argument appears to be that past attempts to extrapolate power consumption or carbon emissions haven't aged well, either because they make too many assumptions, rest on flawed measurements, or fail to take into account the pace of hardware and software innovation.
While there's merit to model optimization, the report does seem to overlook the fact that Moore's Law is slowing down, and that generational improvements in performance aren't expected to bring matching upticks in energy efficiency.
Improving visibility, avoiding regulation, and boosting spending
The report offers several suggestions for how policymakers should respond to concerns about AI's energy footprint.
The first involves developing standards for measuring the power consumption and carbon emissions associated with both AI training and inferencing workloads. Once these have been established, the Center for Data Innovation suggests that policymakers should encourage voluntary reporting.
"Voluntary" appears to be the key word here. While the group says it isn't opposed to regulating AI, the author paints a Catch-22 in which trying to regulate the industry is a lose-lose scenario.
"Policymakers rarely consider that their demands can raise the energy requirements to train and use AI models. For example, debiasing techniques for LLMs frequently add more energy costs in the training and fine-tuning stages," the report reads. "Similarly implementing safeguards to check that LLMs do not return harmful output, such as offensive speech, can result in additional compute costs during inference."
In other words, mandate safeguards and you might make the model more power hungry; mandate power limits and you risk making the model less safe.
Unsurprisingly, the final recommendation calls for governments, including the US, to invest in AI as a way to decarbonize their operations. This includes employing AI to optimize building, transportation, and other city-wide systems.
"To accelerate the use of AI across government agencies toward this goal, the president should sign an executive order directing the Technology Modernization Fund… include environmental impact as one of the priority investment areas for projects to fund," the group wrote.
Of course, all of this is going to require better GPUs and AI accelerators, either purchased directly or rented from cloud providers. That's good news for technology companies, which produce and sell the tools necessary to run these models.
So it's not surprising that Nvidia was keen to highlight the report in a recent blog post. The GPU giant has seen its revenues skyrocket in recent quarters as demand for AI hardware reaches a fever pitch. ®