Big tech companies commit to new safety practices for AI
Summary
Sixteen AI developers, including Microsoft and OpenAI, signed the Frontier AI Safety Commitments to implement safety frameworks and risk thresholds for AI models.
Key quotes
The signatories agreed to publish — if they have not done so already — safety frameworks outlining how they will measure the risks of their respective AI models.
The article discusses the Frontier AI Safety Commitments, compares them with the Bletchley Declaration and the EU AI Act, and highlights the voluntary nature of these commitments.