OpenAI launches Safety Fellowship pilot - Applications now open

OpenAI has launched a pilot Safety Fellowship for outside researchers, engineers, and other specialists working on AI safety and alignment. According to the company, the fellowship will run from September 14, 2026, through February 5, 2027, and is designed to support high-impact research on the safety of current and future advanced AI systems.
OpenAI opens applications for a new safety research program
In its announcement, OpenAI said the program is focused on independent work in areas such as safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. The company said it is especially interested in projects that are technically strong, empirically grounded, and useful to the wider research community.
OpenAI said fellows will work with company mentors and alongside a peer cohort during the program. Workspace will be available in Berkeley through Constellation, although the fellowship will also support remote participation. By the end of the program, fellows are expected to produce a substantial output such as a paper, benchmark, or dataset. OpenAI also said the fellowship includes a monthly stipend, compute support, mentorship, and API credits, while noting that participants will not receive internal system access.
Fellowship is open to applicants from multiple disciplines
The company said it welcomes applicants from backgrounds including computer science, social science, cybersecurity, privacy, human-computer interaction, and related fields. OpenAI added that it plans to prioritize research ability, technical judgment, and execution over formal credentials alone, and that letters of reference will be required as part of the application process.
Applications are now open and will close on May 3, 2026, with successful applicants set to be notified by July 25, 2026, according to the announcement. The fellowship adds another formal safety-focused initiative to OpenAI’s public research and policy efforts as the company continues to frame alignment and misuse prevention as central issues for advanced AI development.
