Chip Industry Leaders Warn of Energy Crisis If AI Power Consumption is Not Addressed

Large, complex AI models like ChatGPT rely on data centers that devour massive amounts of power. Chip leaders worry that without an imminent solution, soaring demand for AI could spark an energy crisis.

Thanks to the popularity of artificial intelligence (AI) applications like OpenAI’s ChatGPT, companies around the world are pushing for new uses of generative AI. Consumers—although often skeptical—are also excited about new ways to harness the power of AI both for work and entertainment.  

While most of the conversation about the latest AI surge centers around the technology needed to power resource-intensive models, experts fear that power itself might be the problem.  

With surging demand for artificial intelligence, chip industry leaders believe the massive data centers that power the technology could lead to an energy crisis. Indeed, with the power demand of data centers already outgrowing the limit of what they can draw from public utilities, addressing the energy needs of AI is an urgent problem. Since avoiding AI isn’t an option in today’s demanding market, more efficient chip infrastructure will be needed to avert a crisis.  

Growing Power Demands

The latest warnings about an AI power crisis come from Ampere Computing founder and CEO Renee James. The former Intel executive and chip industry veteran said in a recent statement to Bloomberg, “Who doesn’t love AI? But we cannot continue to throw power at it. It just doesn’t work.”  

While ChatGPT and other AI models seem capable of answering questions on just about any topic, their own energy demands remain shrouded in mystery. Indeed, when asked about its emissions, the OpenAI model responds with something like “As an AI language model, I don’t directly consume energy,” or “My energy consumption is related to the servers used to host and run my model.” The same is true of Google’s Bard, which often answers with “My carbon footprint is zero.”  

Obviously, the data centers used to run powerful AI applications such as these require tremendous amounts of energy. But figuring out exactly how much energy is used, and the amount of emissions being created, is more difficult. The problem is compounded by the fact that many companies offer only vague details about their data centers and supercomputers.  

Brooklyn-based AI company Hugging Face recently tried to dig deeper, publishing a research paper on the emissions created by training its BLOOM large language model. The team found that the model was responsible for about 50 metric tons of carbon dioxide emissions. That’s roughly equivalent to 60 transatlantic passenger jet flights.  

ChatGPT’s footprint is even larger. Limited public data suggests roughly 500 metric tons of carbon dioxide were produced to train the GPT-3 model—not to mention its widespread public and private use afterward.  

Troublingly, the larger and more complex AI models get, the more power they demand. In turn, they also create more emissions.  

Boosting Efficiency

For many years, the world of data centers has operated with the mindset of improving performance at any cost. Indeed, as tech firms ramped up their digital operations, massive performance boosts were essential to keeping up with both consumer demand and the competition.  

Now, though, facing the golden age of AI, a mindset shift is needed. Ampere’s James argues that the chip industry needs to focus more intently on power consumption. Though performance improvements will always be needed, keeping power demand flat, she says, is the only way to meet the needs of data center clients moving forward.  

Notably, James’ company is making strides in this direction with its latest lineup of chips, dubbed AmpereOne. The company’s previous generation is already in use by giants like Microsoft and Google in their data centers.  

AmpereOne aims to improve efficiency and is the company’s first product to use in-house chip designs. The top-end model features 192 cores that can run calculations simultaneously while drawing less power than competing designs. TSMC is responsible for manufacturing the new chips using its 5-nanometer process.  

Still, AmpereOne demands 350 watts of power, easily doubling or tripling the requirements of most consumer devices. But James says that just 36 of its microprocessors match the performance of 54 AMD chips or 82 Intel chips.  

AmpereOne will put pressure on Intel and AMD to keep pace if they want to retain their clients in the data center market. Of course, these two are also working to up their core counts and bolster the efficiency of their latest product lines.  

As AI continues to grow in popularity, advances like this one will become even more essential. For the chip industry, maximizing efficiency while continuing to improve performance is a key focus area for the coming years.
