Summary

A primer on the ‘if-then commitments’ framework, which links specific AI capability thresholds to required risk mitigations, aiming to prevent catastrophic security risks without slowing innovation.

Key quotes

If an AI model has capability X, risk mitigations Y must be in place. And, if needed, we will delay AI deployment and/or development to ensure the mitigations can be present in time.

The piece discusses how AI developers and regulators can use ‘tripwire capabilities’ to trigger mandatory safety and security measures, and compares the frameworks adopted by Google DeepMind, OpenAI, and Anthropic.