Summary

An independent AI safety assessment of major AI companies reveals systemic unpreparedness, widening gaps in risk management, and failing grades for some firms.

Key quotes

some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it’s a problem for today.
The AI industry is fundamentally unprepared for its own stated goals.
Only 3/7 firms report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism.

Claims using this source

This post announces the release of the Future of Life Institute's Summer 2025 AI Safety Index, an evaluation of seven major AI companies (OpenAI, Anthropic, Meta, etc.) by a panel of six independent experts. Key findings highlight a widening gap between AI capabilities and risk-management practice, with notable deficiencies in testing for dangerous capabilities and in whistleblowing-policy transparency. Anthropic scored highest (C+), while Zhipu AI and DeepSeek received failing grades.