Summary

OpenAI outlines its AI safety strategy, covering testing, real‑world learning, child protection, privacy, factual accuracy, and collaboration with regulators and stakeholders.

Key quotes

Ensuring that AI systems are built, deployed, and used safely is critical to our mission.
For example, after our latest model, GPT‑4, finished training, we spent more than 6 months working across the organization to make it safer and more aligned prior to releasing it publicly.
We require that people must be 18 or older—or 13 or older with parental approval—to use our AI tools.
GPT‑4 is 82% less likely to respond to requests for disallowed content compared to GPT‑3.5.
GPT‑4 is 40% more likely to produce factual content than GPT‑3.5.

The page details OpenAI’s multi‑layered approach to AI safety, including rigorous testing, gradual deployment, and ongoing research. It also describes specific safeguards such as child protection measures and privacy practices.