Summary

A summary of the Future of Life Institute's Summer 2025 AI Safety Index, grading leading AI companies on safety, security, and existential risk planning.

Key quotes

The AI Safety Index is like a scorecard for companies trying to build powerful AI: first, a massive amount of information is gathered about what they actually do and have done, and then a panel of independent experts reviews this and assigns letter grades.
Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in Existential Safety planning.
The obvious takeaway from this is that self-governance just doesn’t work.

The post discusses the Summer 2025 AI Safety Index, noting that Anthropic received the highest overall grade (C+). It emphasizes the gap between companies' stated AGI ambitions and their lack of actionable safety plans.