Summary

A report on the Future of Life Institute's AI Safety Index, evaluating eight AI companies' risk management and safety practices.

Key quotes

Existential safety remains a core structural failure across the industry, with no company demonstrating a credible plan to prevent catastrophic misuse or loss of control.
No company achieved a score above a D in this domain for the second consecutive edition.

Claims using this source

The article summarizes the Future of Life Institute's AI Safety Index, which assigns companies letter grades on a GPA-style scale. It identifies Anthropic, OpenAI, and Google DeepMind as the leaders, while noting that existential safety remains a systemic failure across the industry, with no company scoring above a D in that domain.