OpenAI has officially unveiled Sora 2, its latest flagship model for video and audio generation, positioning it as a major leap forward in AI-powered content creation. The new model offers more realistic physics, greater controllability, and introduces synchronized dialogue and environmental sound effects, bringing a new level of immersion to AI-generated content. Sora 2 debuts alongside a new social app, Sora, aimed at transforming how people interact with AI-generated media. The rollout begins today, September 30, 2025, in the U.S. and Canada.
OpenAI describes the original Sora model (released in early 2024) as a pivotal step for generative video—similar to GPT-1’s impact on natural language processing. Sora 2, by contrast, is described as reaching a “GPT‑3.5 moment” for video, setting a new bar for AI’s understanding of physical reality and world simulation.
According to OpenAI, Sora 2 represents a step change in “world simulation” capability. Whereas earlier systems often bent reality to satisfy a prompt, Sora 2 is designed to model plausible outcomes—including misses and rebounds in a basketball scene—rather than teleporting objects to fit the script. The model can now render scenarios that have historically stumped previous systems, such as Olympic-level gymnastics routines, backflips on water that accurately model the dynamics of buoyancy and rigidity, or even a triple axel performed while a cat clings on for dear life. These advances extend to audio as well: Sora 2 generates background soundscapes, synchronized speech, and sound effects that align with on-screen action, adding a new dimension of realism.
A highlight feature is the ability to “upload yourself.” Users can record a short video and audio sample, allowing Sora 2 to insert them, along with friends, animals, or other real-world subjects, into any generated scene with a highly accurate visual and voice likeness.
To demonstrate Sora 2’s capabilities, OpenAI is launching a new iOS app called Sora. The app enables users to create and remix short videos, discover community content in a customizable feed, and star in AI-generated scenes through the “cameos” feature. Cameos require a brief verification process to ensure user control and consent, with full options for privacy and content removal.
OpenAI positions the new Sora app as creation-first rather than engagement-maximized. The default feed prioritizes people you follow and content likely to inspire your own creations; the company says it is not optimizing for time spent. Teen accounts get daily feed-view limits and stricter cameo permissions, while parental controls (via ChatGPT) allow overrides like disabling algorithmic personalization or limiting infinite scroll. OpenAI is also scaling up human moderation to address bullying and other abuse risks.
Sora 2 is rolling out today in the U.S. and Canada, with plans to expand to additional regions. The app is launching as invite-only to encourage collaborative use among friends. The model is initially free to use with “generous limits,” subject to compute constraints. ChatGPT Pro users get first access to the higher-quality Sora 2 Pro model via ChatGPT, with support in the Sora app coming soon. OpenAI also plans to release Sora 2 via API for broader integration. Content generated with Sora 1 Turbo will remain accessible in users’ libraries.
OpenAI frames Sora 2 as an important step toward the development of general-purpose world simulators and, eventually, advanced robotics. The company sees improvements in video-native AI models as a pathway not just for creative tools, but for systems that can better understand, interact with, and eventually function within the physical world.