
Shared content may not reflect the policies of Tilburg University on the use of AI. 

LLMs' Environmental Impact: Are LLMs bad for the environment?

Currently, yes.

Large Language Models (LLMs), which form the core of generative AI applications such as GPT-4, require a significant amount of energy for their training, development, and scaling processes. This high energy consumption can result in considerable environmental impacts, including increased carbon emissions. In this article, we aim to acquaint you with some aspects of the energy usage of LLMs and their environmental impact.

Efficiency of LLMs Compared to Specialized Technologies

LLMs like the GPT series are versatile and designed to handle a broad array of tasks, such as answering questions, creating text, and more. However, when deployed for specific purposes, such as conducting web searches, LLMs may not be as efficient as specialized technologies. For instance, traditional web search engines are explicitly optimized for searching, making them faster and more energy-efficient for that task than LLMs.

Did you know that a single ChatGPT question consumes more energy than a Google search? The energy consumption of a Google search is around 0.0003 kWh. A ChatGPT interaction using GPT-4 can consume anywhere from 0.001 to 0.01 kWh, depending on the model size and the number of tokens used. That means a single ChatGPT interaction uses roughly 3 to over 30 times as much energy as a Google search. To put these numbers in perspective, a 60 W light bulb uses 0.06 kWh in an hour.
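To make the comparison concrete, here is a minimal back-of-envelope sketch in Python, using only the rough per-query figures quoted above; the exact values vary with the model, prompt length, and hardware:

```python
# Back-of-envelope comparison using the rough figures quoted in the text.
# These per-query energy values are estimates, not measurements.

GOOGLE_SEARCH_KWH = 0.0003          # ~0.3 Wh per Google search
CHATGPT_KWH_RANGE = (0.001, 0.01)   # ~1-10 Wh per ChatGPT (GPT-4) interaction
BULB_KWH_PER_HOUR = 0.06            # a 60 W bulb running for one hour

for chatgpt_kwh in CHATGPT_KWH_RANGE:
    ratio = chatgpt_kwh / GOOGLE_SEARCH_KWH          # how many searches' worth of energy
    bulb_minutes = chatgpt_kwh / BULB_KWH_PER_HOUR * 60
    print(f"{chatgpt_kwh:.3f} kWh -> about {ratio:.0f}x a Google search, "
          f"or a 60 W bulb for {bulb_minutes:.0f} min")
```

Running this prints that the low end of the range corresponds to roughly 3 Google searches (about one bulb-minute) and the high end to roughly 33 searches (about ten bulb-minutes).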

The Carbon Footprint of LLMs

There has been significant concern about the carbon footprint of LLMs, which refers to the total greenhouse gas emissions associated with their training and operation. Initial estimates of this footprint, such as those by Strubell et al. in 2019, were quite pessimistic and generated alarming headlines. However, newer estimates, such as those discussed in a recent study by Faiz et al. (2024), provide a more accurate and nuanced view of the environmental impact. Read the full paper here.

Interested in learning more about how to reduce your AI’s energy use and environmental impact? Discover actionable tips in our guide on reducing AI’s footprint here.

Factors Influencing LLMs’ Carbon Footprint

The carbon footprint of LLMs can vary significantly depending on several factors:

  • Model Size: The amount of power consumed by the current generation of LLMs is associated with the size of the data sets they are trained on. An LLM’s size can be characterized in part by the number of parameters used in its inference operations. More parameters mean more data to move around and more computations to make use of that data. The largest and most advanced LLMs, designed to outperform competitors, are highly carbon-intensive. The difference in energy consumption between large LLMs and smaller, more efficient ones can be drastic, potentially a factor of 100 (see the sketch after this list). There is therefore also a push to create efficient “small” LLMs, even ones that could run on your own computer.
  • Energy Source: The environmental impact also depends heavily on the cleanliness of the energy powering the data centers where these models run. Clean, renewable energy sources can drastically reduce the carbon footprint, much like the difference in carbon emissions between taking a flight versus a train.
  • Retraining: In addition, many deployed models can only be used for a short time — weeks or months — before the model needs to be retrained. Addressing the problem of model drift requires repeating steps from the original training process and consuming a similar amount of power.
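
To illustrate how these factors combine, here is a minimal, hypothetical sketch. The per-query energy values and grid carbon intensities below are illustrative assumptions chosen to mirror the factor-of-100 model-size gap and the clean-versus-dirty-grid contrast described above; they are not published measurements for any specific model:

```python
# Hypothetical illustration of the "multipliers" above: model size (via
# assumed energy per query) and grid carbon intensity. All numbers are
# assumptions used only to show how the factors multiply.

ENERGY_PER_QUERY_KWH = {          # assumed energy per query (kWh)
    "large LLM": 0.01,            # a large, frontier-scale model
    "small LLM": 0.0001,          # a small, efficient model (~100x less)
}

CARBON_INTENSITY = {              # assumed grid carbon intensity (kg CO2e per kWh)
    "fossil-heavy grid": 0.8,
    "renewable-heavy grid": 0.05,
}

QUERIES = 1_000_000               # one million queries

for model, kwh_per_query in ENERGY_PER_QUERY_KWH.items():
    for grid, intensity in CARBON_INTENSITY.items():
        emissions_kg = QUERIES * kwh_per_query * intensity
        print(f"{model} on {grid}: {emissions_kg:,.0f} kg CO2e "
              f"per {QUERIES:,} queries")
```

The point is not the absolute numbers but that the factors multiply: under these assumptions, a small model running on a clean grid emits orders of magnitude less CO2e than a large model running on a fossil-heavy grid for the same number of queries.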

Just like the decision between taking a flight or a train can lead to vastly different carbon footprints, understanding and considering these “multipliers” is crucial when evaluating LLM usage. For more insights, read about the environmental impact of AI models in this article.

Transparency and Responsible Use

Not all LLM developers are transparent about their models’ environmental impact. For example, the developers of GPT-4 have not disclosed detailed information about the carbon footprint of their models. It is recommended to use LLMs that provide clear and specific data on their environmental impact to avoid incentivizing non-transparent and potentially harmful practices.

“I would recommend only using LLMs that make clear numerical statements about their carbon footprint. We must not incentivize bad practices.” (Dan Stowell, CSAI – TiU, May 2024)