Summary

OpenAI is launching a six-month Safety Fellowship to fund external researchers studying AI risks, specifically focusing on agentic oversight and high-severity misuse domains.

Key quotes

The Safety Fellowship funds external researchers to study AI risks, signaling a move to broaden participation in alignment and safety work beyond OpenAI's in-house teams.
OpenAI said the priority areas for its fellowship include “agentic oversight” and “high-severity misuse domains,” reflecting concerns about systems capable of taking multi-step actions with limited human intervention.

The program will run from September 2026 to February 2027, providing stipends and model access to participating researchers. It reflects a broader trend among AI labs, including Anthropic and Google, of funding external safety research.