Summary

OpenAI has introduced a six-month Safety Fellowship to fund external researchers studying AI risks, focusing on agentic oversight and high-severity misuse domains.

Key quotes

OpenAI's new Safety Fellowship funds external researchers to study AI risks, signaling a move to broaden participation in alignment and safety work.
Priority areas for the fellowship include "agentic oversight" and "high-severity misuse domains," reflecting concerns about systems capable of taking multi-step actions.

The program runs from September 2026 to February 2027. It provides external researchers with stipends, model access, and technical support to study alignment and safety.