Most major AI Model Producers score D or worse on FLI's 2025 AI Safety Index for Existential Threat
Evidence Summary:
- The Future of Life Institute’s Summer 2025 AI Safety Index reports that no evaluated company scored above a D in the Existential Safety planning domain; that is, every company received a grade of D or lower【2025/ai-safety-index-summer-2025】.
- An independent article from Quantum Zeitgeist echoes this finding, stating that “No company achieved a score above a D in this domain for the second consecutive edition”【2025/ai-safety-future-of-life-institute-risk-mitigation】.
- The index evaluates seven leading AI developers, including Anthropic, OpenAI, Google DeepMind, and Meta; these constitute the major AI model producers referenced in the claim【2025/ai-safety-index-summer-2025】.
Conclusion: The claim that most major AI model producers score D or worse on the FLI 2025 AI Safety Index for Existential Threat is directly supported by both the primary FLI report and independent reporting. All assessed producers received grades of D or lower, which more than satisfies the claim's "most" threshold.
Sub‑question coverage:
- sq1 (methodology) and sq2 (included companies) are addressed by the FLI report【2025/ai-safety-index-summer-2025】.
- sq3 (individual scores) is covered by the same report and the independent article【2025/ai-safety-index-summer-2025】【2025/ai-safety-future-of-life-institute-risk-mitigation】.
No contradictory evidence was found.
Criterion: Most major AI model producers score D or worse on FLI's 2025 AI Safety Index for Existential Threat
Review cadence
This claim is reviewed every 60 days.