
The Untold Story Of AI's Huge Carbon Footprint

Forbes Technology Council

Lee-Lean Shu, CEO, GSI Technology.

AI comes with a significant carbon footprint. Researchers at the University of Massachusetts Amherst analyzed several natural language processing (NLP) models to estimate the energy required to train them and concluded that training a single large model can emit around 600,000 pounds of CO2, roughly the equivalent of 125 round-trip flights between New York and Beijing.
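A quick back-of-envelope calculation makes that equivalence concrete. The per-passenger flight figure below is an assumption drawn from commonly cited long-haul estimates (roughly 0.1 kg of CO2 per passenger-kilometer), not a number from the study itself:

    LBS_PER_KG = 2.20462
    training_kg = 600_000 / LBS_PER_KG      # ~272 metric tons of CO2

    ny_beijing_km = 11_000                  # approximate one-way distance
    kg_per_pax_km = 0.1                     # assumed long-haul economy figure
    round_trip_kg = 2 * ny_beijing_km * kg_per_pax_km   # ~2,200 kg per passenger

    print(round(training_kg / round_trip_kg))  # ~124 round trips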

It’s time to think and work in new ways if we are to minimize the most dire impacts of climate change. We’re already seeing the effects of rising temperatures across the globe, from parching droughts to deadly heat waves to rampant wildfires and flash floods. In Earth’s 4.6 billion years of existence, our planet has rarely, if ever, heated as fast as it is heating now.

Action is imperative. One action the tech world can take is to make hardware and chipsets designed for both high performance and power efficiency a central component of future AI applications. Every hyperscaler and AI service provider, whether it’s OpenAI, Google, Amazon or Anthropic, is clamoring for ever more powerful processors to handle its AI models and workloads. As they do, they should take climate change into consideration and seek out high-performance processors that use less energy.

If they don’t, the consequences will be severe. The large language models behind generative AI will require ever more compute to keep getting better and more accurate, and with the resources we have right now, the next leap forward would take several countries' worth of energy consumption.

AI's Ever-Increasing Demand For Power

Humans have the capacity to do great good and great harm, and to do both simultaneously. In agriculture, for example, we deployed the science of industrial-chemical farming to increase yields tremendously. But at the same time, we destroyed sustainable organic farming systems that had been in place for generations and put tons of carbon into the atmosphere instead of into the soil, where it is needed for crop growth.

In the world of technology, we have conjured up ever smaller, ever cheaper, ever more powerful microchips for decades. The ability to keep cramming more circuitry into shrinking form factors meant designers could keep reusing the same architectures while packing in more and more transistors.

The result is that we not only exponentially increased the density of silicon technology but also massively increased the power consumption of the final products. A PC power supply about 15 years ago was around 150W; today, a high-end PC can ship with a 1,500W supply.

This increase in energy demand is now happening with AI—but on steroids. Indeed, the amount of energy consumed by AI is almost incomprehensible. It’s so large that most of us would rather not think about it as we enjoy the advantages it provides.

But you don’t fight fire by ignoring it, and we can’t close our eyes to the large contribution AI is making to rising global temperatures. The focus has shifted to its benefits, whether it’s better decision-making, greater productivity or enhanced innovation. However, an even greater focus on the costs of prolific AI usage is also needed.

Clearly, we now face a dilemma: We need more processing power to develop the next generation of AI, but we need to deliver that performance at far lower power so that we consume less energy. Companies are scrambling to develop new generations of AI, and chipmakers are complying, feeding the frenzy to push the boundaries of AI with ever more powerful, and ever more power-hungry, processors.

AI’s power demand is amplified by the fact that most developers rely on a technique called “sharding,” in which a model and its workload are broken into pieces and each piece runs on a separate power-hungry GPU (see the sketch below). Sharding consumes massive amounts of energy now and will consume far more in the future. New generations of AI will require vastly more power, so much so that there is real doubt our current power grid, and our planet, can handle it.
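As a conceptual illustration, here is a minimal sketch of sharding in Python, with NumPy arrays standing in for model weights and a plain list standing in for separate GPUs. Real frameworks place each shard on a physical device; the shapes and device count here are arbitrary:

    import numpy as np

    inputs = np.random.randn(4, 512)        # a batch of activations
    weights = np.random.randn(512, 1024)    # one large weight matrix

    # Shard the weights column-wise across 4 "devices" (here, list entries).
    num_devices = 4
    shards = np.split(weights, num_devices, axis=1)

    # Each device multiplies the same inputs by its own shard, in parallel.
    partial_outputs = [inputs @ shard for shard in shards]

    # Concatenating the partial results reproduces the unsharded computation.
    output = np.concatenate(partial_outputs, axis=1)
    assert np.allclose(output, inputs @ weights)

Every shard keeps its own GPU busy for the duration of the job, which is why sharded training multiplies power draw across an entire cluster rather than concentrating it in one chip.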

We need a paradigm shift. We need to increase the performance of processors to run new AI models while, at the same time, reducing their energy demand.

A Processor That Solves The Problem

As AI usage expands and the impact on our planet grows, it will be imperative to find more power-efficient alternatives to the processor technologies available now. The good news is that there is a way to create more powerful yet less power-hungry processors for AI.

Efforts are underway to create powerful processors tailored specifically for AI workloads, with energy efficiency as a first-class design goal. By shrinking system footprints, these designs can cut power consumption significantly, benefiting the environment and lowering users' total cost of ownership. They offer the high-performance capabilities AI requires while consuming less power.

At the forefront of these efforts is compute-in-memory, an architecture that performs computation directly inside the memory array. Because conventional processors spend much of their energy shuttling data between memory and compute units rather than on the computation itself, keeping the work in memory both cuts power requirements significantly and increases processing speed.
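A rough energy model shows why data movement, not arithmetic, dominates. The picojoule figures below are commonly cited orders of magnitude from the computer-architecture literature (roughly 1 pJ for a 32-bit on-chip operation versus hundreds of pJ for an off-chip DRAM access); they are illustrative assumptions, not measurements of any particular product:

    PJ_PER_OP = 1.0           # assumed energy per 32-bit arithmetic op
    PJ_PER_DRAM_WORD = 640.0  # assumed energy to fetch one 32-bit word from DRAM

    ops = 1_000_000           # operations in a small kernel
    words_moved = 1_000_000   # conventional design: one DRAM fetch per op

    conventional_pj = ops * PJ_PER_OP + words_moved * PJ_PER_DRAM_WORD
    in_memory_pj = ops * PJ_PER_OP    # compute-in-memory: operands stay put

    print(round(conventional_pj / in_memory_pj))  # ~641x less energy

Even if the exact constants are off by a factor of a few, the conclusion is the same: eliminating data movement is where the large power savings come from.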

The next generation of AI applications will rely heavily on new hardware and chipsets engineered for both higher performance and greater energy efficiency. These advances offer a promising path forward, giving AI applications the processing power they need while delivering substantial energy savings. That benefits not only the advancement of AI but also the sustainability of our planet.



