GenAI Exposes Enterprise Sustainability Hypocrisy
Estimates of GenAI’s carbon footprint are all over the place. How big is the problem? Find out more in this article.
Last year, I met with the head of sustainability at a large software vendor. He was touting his company’s commitment to becoming carbon net-zero by 2030 – balancing its responsibility for the production of greenhouse gases with an equal commitment to the removal of such gases (mainly carbon dioxide, or CO2) from the atmosphere.
Many companies – software vendors as well as a broad spectrum of organizations across industries – have made similar promises.
There are two sides to this commitment: reduction of the amount of carbon put into the atmosphere, as well as investments in technologies that remove such carbon.
One would expect, therefore, that vendors with sustainability priorities would be putting substantial effort into managing and reducing their own carbon footprints. The last thing they would want to do is jump into a new initiative that adds a massive new contribution of CO2.
Except that’s exactly what most software vendors are doing with generative AI (GenAI). GenAI generates massive quantities of CO2 – perhaps more than any previous tech innovation.
What about all those vendors’ sustainability initiatives? It seems that any attention that had been focused on sustainability through mid-2023 has now swiveled to GenAI like the eye of Sauron.
What gives?
How Big Is the Problem?
With GenAI, the more data the better. To process all that data, organizations need massive numbers of electricity-chugging GPUs. It doesn’t matter whether they’re in the cloud, in a colo somewhere, or in an on-premises data center: GenAI’s electricity bills are staggering.
Estimates of GenAI’s carbon footprint are all over the place. One estimate placed the carbon cost of getting a single GPT-3 model ready to launch at 552 tons of carbon dioxide, the result of consuming 1,287 megawatt-hours of electricity – not including the cost of actually running the model in production.
Another study estimates that by 2027, all AI servers taken together would consume between 85 and 134 terawatt-hours each year, the equivalent of between 36 and 57 million tons of CO2.
To put that into context, global aviation generated 920 million tons of CO2 in 2019, so all AI taken together would still account for less than 10% of aviation’s footprint – but a massive amount nevertheless.
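These estimates are easy to sanity-check with back-of-envelope arithmetic. The short Python sketch below simply reproduces the figures above; the only assumption it adds is that carbon intensity is emissions divided by electricity consumed, so the numbers are purely illustrative.

```python
# Back-of-envelope sanity check of the figures cited above.
# All inputs come from the article; the arithmetic itself is illustrative only.

# GPT-3 training estimate
training_co2_tons = 552        # tons of CO2 to get one GPT-3 model ready to launch
training_energy_mwh = 1_287    # megawatt-hours of electricity consumed

# Implied grid carbon intensity (tons of CO2 per MWh)
intensity = training_co2_tons / training_energy_mwh
print(f"Implied carbon intensity: {intensity:.2f} tons CO2/MWh "
      f"(~{intensity * 1000:.0f} g CO2/kWh)")           # ~0.43 tons/MWh, ~429 g/kWh

# Projected AI server consumption by 2027, in terawatt-hours per year
ai_energy_twh = (85, 134)
# 1 TWh = 1e6 MWh and 1e6 tons = 1 million tons, so the conversion factors cancel
ai_co2_mtons = [twh * intensity for twh in ai_energy_twh]
print(f"Projected AI emissions: {ai_co2_mtons[0]:.0f}-{ai_co2_mtons[1]:.0f} "
      f"million tons of CO2 per year")                   # ~36-57 million tons

# Compare against global aviation: 920 million tons of CO2 in 2019
aviation_mtons = 920
print(f"Upper-bound AI share of aviation: {max(ai_co2_mtons) / aviation_mtons:.0%}")  # ~6%
```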
Researchers at Meta go further. They have estimated that everyday usage of the large language models that support GenAI will consume all the world’s energy production by 2040 – a mere sixteen years from now.
How Do We Fix It?
In spite of these frightening numbers, no one is saying that we should forgo GenAI because of its deleterious impact on global climate change. The best answers that vendors are offering are merely ways to mitigate the effects.
Such mitigation techniques have become familiar over the last decade. Perhaps organizations should run their AI routines at night, or at other times when electricity consumption is lower than average. Other companies suggest placing data centers that offer GenAI capabilities in locations that draw electricity from renewable sources.
There are also discussions of tweaking GenAI processes to consume less electricity and thus produce less CO2 – for example, by feeding smaller data sets into GenAI model training. If 70% of the data yields results that are almost as good, then such a reduction makes economic and sustainability sense, as the rough sketch below illustrates.
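As an illustration of that trade-off – not any vendor’s actual methodology – here is a minimal sketch that assumes training energy scales roughly linearly with the volume of data processed (a labeled assumption and a simplification) and estimates the energy and CO2 saved by training on 70% of a data set.

```python
# Illustrative sketch only: assumes training energy scales roughly linearly
# with the volume of data processed, which is a simplification.

def estimated_savings(baseline_energy_mwh: float,
                      data_fraction: float,
                      grid_intensity_tons_per_mwh: float = 0.43) -> dict:
    """Estimate energy and CO2 saved by training on a fraction of the data.

    baseline_energy_mwh: energy to train on the full data set
    data_fraction: share of the data actually used (e.g., 0.7 for 70%)
    grid_intensity_tons_per_mwh: assumed grid carbon intensity
    """
    reduced_energy = baseline_energy_mwh * data_fraction
    saved_energy = baseline_energy_mwh - reduced_energy
    return {
        "energy_saved_mwh": saved_energy,
        "co2_saved_tons": saved_energy * grid_intensity_tons_per_mwh,
    }

# Using the GPT-3-scale training figure cited earlier (1,287 MWh):
print(estimated_savings(1_287, 0.7))
# -> roughly 386 MWh and ~166 tons of CO2 saved per training run
```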
Such a reduction in the size of data sets, however, goes against the trends in the industry. Customers are demanding better, more accurate results from GenAI, and better results require more and better data – and thus more electricity.
The Big Fallacy
All these alternatives suffer from an underlying fallacy that is ubiquitous across the sustainability landscape.
The fallacy lies in how we evaluate offsets. It’s OK to generate more CO2, the argument goes, as long as we offset that generation with an action that removes CO2 from the atmosphere. The flaw is the conclusion that such offsets somehow make the additional generation of CO2 acceptable. It’s not. In reality, we must reduce the production of CO2, and any argument that justifies increasing it is fallacious rationalization.
In truth, if we have the option of taking action to reduce our carbon footprint, wouldn’t it make more sense to reduce the CO2 without generating more CO2?
If we’re only considering mitigation actions that compensate for new initiatives, then we’ll never make any progress. Instead, those mitigation actions are mere window dressing – things we do to make ourselves feel better or to generate good PR so that we can keep dumping carbon into the atmosphere with impunity.
Comparing GenAI to Cryptocurrency
If this fallacy sounds familiar, it’s no wonder. This fallacy is most apparent in the world of cryptocurrency – Bitcoin and other proof-of-work crypto in particular.
The sustainability equation for crypto is particularly stark, as proof-of-work consumes so much electricity while crypto itself provides absolutely no value.
When crypto miners propose mitigation actions like running their mines near hydroelectric plants or building wind farms to power them, they are falling into this fallacy. Wouldn’t it be better to build the wind farm while turning off the crypto mines anyway?
Comparing GenAI to crypto, however, is a bit of an apples-to-oranges comparison, as GenAI differs from crypto in that it arguably has utility. Even this point is in question, however. Just how many conversational chatbots and Taylor Swift deepfakes do we really need? Is all this GenAI activity worth its effect on the environment?
We can nevertheless argue that the value we get from such AI justifies its carbon footprint. That argument is perhaps the most common one, after all.
But that argument is different from saying that we can be justified in generating all the extra CO2 from GenAI because we’ve located our data centers in Iceland or invested in a wind farm somewhere.
My Take
It might sound like I’m arguing that we should stop building and using GenAI. That position, however, cannot stand up to reality. The reality is that innovation in AI will continue to progress, while commercial uses of the technology will undoubtedly grow. We’re stuck with GenAI whether we like it or not, carbon footprint be damned.
The hypocrisy appears when organizations say they are committed to their sustainability goals but nevertheless proceed with their GenAI initiatives. After all, sustainability is great, but not if it adversely impacts profitability, right?
Strip out the hypocrisy, and what we are really serious about is achieving a tolerable balance. Yes, we’re OK with rising temperatures, extreme weather, and losing low-lying land to the oceans, as long as we can keep our technology-infused lifestyles – and those lifestyles now include GenAI.
When vendors tout their carbon-zero sustainability goals while simultaneously pouring money into GenAI, however, they are simply being hypocritical. Sustainability is not a PR effort. It’s about saving the planet, remember?
Published at DZone with permission of Jason Bloomberg, DZone MVB.