OpenAI released Sora 2, its latest AI video generation model, on September 30, 2025—and within hours, users were creating videos featuring Mickey Mouse, Pokémon characters, and scenes from major Hollywood productions. The launch has ignited what legal experts are calling one of the most significant copyright controversies in AI history.
The platform’s approach to intellectual property has sent shockwaves through the entertainment industry: OpenAI told talent agencies and film studios that Sora 2 would generate videos featuring copyrighted material unless copyright holders actively opt out.
“If OpenAI adopts an aggressive stance allowing outputs of your copyrighted work unless you opt out, that seems unlikely to hold up legally,” said Mark McKenna, a legal expert on intellectual property. “Copyright law doesn’t work that way. You aren’t required to opt out of someone else’s rules.”
What Sora 2 Can Do
Sora 2 represents a significant leap beyond its predecessor, generating videos with synchronized audio including dialogue, sound effects, and ambient sound. The model maintains consistent physics across multi-shot sequences and produces content in realistic, cinematic, and animated styles.
The platform launches alongside a TikTok-style social media app featuring an algorithmic feed where all content is AI-generated. Users can insert themselves into videos through a “Cameo” feature. Initially available through invite-only access in the United States and Canada, the platform prioritizes ChatGPT Pro subscribers.
A Copyright Infringement Machine?
Since launch, users have successfully generated videos featuring:
- Disney characters including Mickey Mouse and Elsa from Frozen
- Popular TV shows like Rick and Morty, South Park, and Family Guy
- Video game properties including Pokémon and Grand Theft Auto
- Major film franchises from Star Wars to Marvel
404 Media described Sora 2 as a “Copyright Infringement Machine,” documenting content ranging from Nazi-themed SpongeBob videos to depictions of Pikachu committing crimes. The platform’s content filters appear ineffective at preventing the generation of recognizable copyrighted characters.
Hollywood Reacts
Major entertainment studios view the technology as an existential threat. Disney has already opted out of having its content appear on the platform, while other studios are reportedly considering legal action.
“They’re turning copyright on its head,” said copyright attorney Rob Rosenberg. Ed Klaris, an intellectual property attorney at Columbia Law School, noted that OpenAI’s approach runs counter to the protections copyright law is designed to provide.
The opt-out framework puts studios in a reactive position—they must continuously monitor for their intellectual property and request removal, rather than having their rights respected by default. While major studios may have the resources to navigate this system, independent creators and smaller production companies will find it practically impossible to monitor and protect their work.
Already Facing Legal Battles
Sora 2 launches into a legal landscape already dominated by AI copyright litigation. More than 25 copyright infringement lawsuits currently sit in federal courts against major AI companies, including several targeting OpenAI itself.
Key ongoing cases include:
- The New York Times v. OpenAI & Microsoft, claiming unauthorized use of millions of articles for training
- Disney & Universal v. Midjourney, the first major Hollywood studio lawsuit against an AI company for generating copyrighted characters
- Multiple authors’ class action lawsuits against OpenAI, Meta, and Anthropic
The first half of 2025 produced conflicting court decisions that demonstrate how unsettled AI copyright law remains. In February, a Delaware federal court ruled against an AI company in Thomson Reuters v. Ross Intelligence, finding that using copyrighted material to train AI constituted infringement. But months later, California courts ruled in favor of Meta and Anthropic in separate cases, finding certain types of AI training qualified as fair use.
The Fair Use Question
AI companies consistently invoke fair use as their primary defense, but legal experts say that doctrine may not protect the kind of output Sora 2 produces.
While courts remain divided on whether AI training itself constitutes fair use, the generation of recognizable copyrighted characters presents a different legal challenge. When Sora 2 generates a video of Mickey Mouse, legal experts argue that’s not transformative use—it’s reproducing a copyrighted character for entertainment purposes, precisely the kind of use copyright is meant to protect against.
“The training process itself might qualify as fair use under some interpretations,” said one intellectual property attorney. “But the generation of copyrighted content likely doesn’t. That distinction matters.”
What Comes Next
Congress has begun addressing AI copyright issues through proposed legislation, including the Generative AI Copyright Disclosure Act of 2024, which would require companies to disclose datasets used for training AI models. But no legislation has passed, and political gridlock makes prospects uncertain.
Legal experts predict Sora 2’s opt-out approach will face swift legal challenges. Copyright holders can currently request opt-out status through OpenAI’s copyright disputes page, issue DMCA takedown notices for specific infringing content, or pursue litigation.
“This could be a turning point,” said one copyright attorney specializing in AI cases. “When the system generates recognizable copyrighted characters doing things their copyright owners never authorized, that’s not transformation—it’s reproduction and derivative work creation, both core rights that copyright owners control.”
OpenAI has not disclosed the full training dataset for Sora 2, making it impossible for creators to know whether their work was used without permission. The company maintains that its training data consists of publicly available and licensed content but refuses to provide detailed breakdowns.
The coming months will reveal whether OpenAI’s approach survives legal scrutiny or whether courts will reject what critics call a fundamental misunderstanding of how copyright law works. Either outcome will significantly shape how AI companies approach copyrighted material going forward.