Summary

The article quantifies carbon emissions of AI token generation, outlines training and inference footprints of major LLMs, and offers strategies to reduce emissions.

Key quotes

Every time you prompt an AI and it generates a word, there's an invisible cost paid in carbon.
A single ChatGPT request emits roughly 4.32 g CO₂ on average.
60–90% of an LLM's life-cycle emissions come from inference, not training.
OpenAI GPT-4 (2023) … 12,000–15,000 t CO₂ (estimate via analysis of leaked compute).
Rule #1: Use the smallest model that meets the task's quality bar.
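The per-request figure quoted above lends itself to quick back-of-envelope estimates. The sketch below is a hypothetical illustration, not from the article: it takes the ~4.32 g CO₂ average at face value and scales it to a request volume (function name and constant are assumptions for illustration).

```python
# Hypothetical sketch: scale the article's ~4.32 g CO2 per request
# figure to a given request volume. Real per-request emissions vary
# widely with model size, prompt length, and data-center energy mix.
GRAMS_PER_REQUEST = 4.32  # average per ChatGPT request, per the article

def inference_footprint_kg(requests: int,
                           grams_per_request: float = GRAMS_PER_REQUEST) -> float:
    """Return estimated CO2 in kilograms for a number of requests."""
    return requests * grams_per_request / 1000.0

# One million requests at the article's average:
print(round(inference_footprint_kg(1_000_000), 1))  # 4320.0 kg, i.e. ~4.3 t CO2
```

At this average, a million requests land in the tonnes range, which is why the article weights inference (60–90% of life-cycle emissions) so heavily relative to training.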

The post, published by DitchCarbon, presents data on CO₂ per token for various models, compares training emissions, and recommends low‑carbon model choices and efficiency practices for developers.