Summary

A collection of documented safety incidents, security vulnerabilities, and transparency issues at OpenAI, drawing on reporting from major news outlets and testimony from former employees.

Key quotes

Our research has documented various safety incidents, broken promises, and red flags within OpenAI’s organizational culture.
OpenAI’s internal security was not prioritized. When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems, including GPT-4.

The page serves as a curated index of allegations and reports concerning OpenAI’s safety protocols, non-disclosure agreements, and security breaches, aggregating evidence from sources including The New York Times, The Washington Post, and former employees.