Corporate Consent and the Future of Intellectual Property
- Zhena Omojola

Introduction
Disney has long been a strong defender of its intellectual property (IP). For corporate works made for hire, copyright protection typically lasts 95 years from publication, allowing the company to maintain control over its characters and stories for generations. In contrast, the public domain in the United States consists of creative works, inventions, and information that are no longer protected by intellectual property law. Material in the public domain can be freely used, adapted, and built upon without permission because no individual or entity holds exclusive rights over it.
Steamboat Willie, the 1928 short film that introduced Mickey Mouse, entered the public domain in 2024. It had been expected to enter the public domain much earlier, but its copyright term was extended multiple times through legislation strongly supported by Disney, most notably the 1998 Copyright Term Extension Act, often nicknamed the “Mickey Mouse Protection Act.” For decades, Disney’s legal strategy centered on prolonging exclusivity and maintaining strict control over how its creative properties could be used.
Against this historical backdrop, Disney’s recent agreement with OpenAI represents a notable shift. Rather than exclusively guarding access to its intellectual property, Disney is now engaging with a technology built on generative systems that rely on large-scale data use and creative recombination. This shift changes how consent operates in the digital age, moving authority away from individual creators and toward corporate entities, and in doing so reshaping intellectual property law, moral rights, and the future of media production. It also raises an important question: what happens when a corporation defined by protecting intellectual property collaborates with an AI company whose innovation depends on generating new content from existing media?
The Disney and OpenAI Deal
In December 2025, The Walt Disney Company announced a landmark three-year agreement with OpenAI, including a reported $1 billion investment and a licensing arrangement permitting more than 200 characters from Disney, Marvel, Pixar, and Star Wars to be integrated into OpenAI’s video-generation model, Sora. The partnership allows users to create short AI-generated videos on Sora featuring licensed characters, with select content expected to appear on Disney+ beginning in early 2026.
Historically, Disney has maintained an aggressive stance toward protecting its intellectual property, frequently initiating litigation against individuals and corporations accused of unauthorized use of its copyrighted characters. Most recently, the company filed a lawsuit against the AI developer Midjourney, alleging that characters such as Darth Vader and Elsa were used without authorization to train generative AI models.
Against this backdrop, the OpenAI agreement represents a notable shift in strategy. Rather than resisting artificial intelligence through litigation alone, Disney has moved toward a model of managed collaboration, using licensing and contractual control to shape how its intellectual property functions within emerging AI technologies.
Corporate Consent vs. Individual Consent
Corporate consent refers to permission granted by a legal intellectual property owner, usually a corporation, rather than by the individual creators or performers behind a work. This differs from traditional ideas of creative authorship. Writers, animators, voice actors, and even cultural communities all contribute deeply to the development and meaning of iconic characters, yet legal ownership ultimately belongs to companies like Disney.
This dynamic raises important moral rights concerns because creators often have little control over how their work is altered, reinterpreted, or reproduced through AI systems. As a result, AI-generated content can be legally authorized while still ethically troubling. For example, after the release of Sora 2, users created videos depicting Martin Luther King Jr. in racist and degrading scenarios that quickly spread across social media.
Although OpenAI later restricted the use of King’s likeness within the platform, the harm had already occurred. Situations like this reveal how corporate control over intellectual property can override individual consent, leaving creators and cultural subjects with little power over how their identities and work are used by AI systems.
Impact on Artists’ Rights and Moral Rights
This deal has significant implications for voice actors whose likenesses and performances inform beloved characters, writers whose narrative structures shape the stories audiences know and love, and cultural communities whose symbols and histories are embedded within Disney properties.
The potential consequences are considerable. AI-generated images already occupy an enormous share of online media, and many viewers struggle to distinguish fabricated content from reality. AI-generated variations of Disney characters could dilute original artistic intent, while the creators who helped bring these characters to life may receive no compensation from AI-produced outputs.
These concerns reflect broader industry debates surrounding artificial intelligence and creative labor. Recent entertainment industry negotiations have increasingly focused on AI protections, most notably during the 2023 SAG-AFTRA and Writers Guild of America strikes, where performers and writers raised concerns about digital replication.
As a result, there may be growing pressure to include AI-specific clauses in future contracts. Without stronger moral rights protections or targeted AI legislation, corporate licensing agreements may allow legal ownership to override the rights and recognition of the creators and communities whose labor sustains these cultural works.
Precedent for Other Media Companies
This deal could encourage other studios, such as Warner Bros., Universal, and Netflix, to pursue similar agreements with artificial intelligence companies. As a result, private AI licensing arrangements for proprietary characters may become the industry norm. Such partnerships could signal a shift away from litigation toward negotiated agreements that allow corporations and technology companies to collaborate under mutually favorable terms.
However, the risks of this trend are significant. Power may become increasingly consolidated among a small number of major corporations, while independent creators, whose work continues to drive innovation and creativity, may lack comparable bargaining power over how their creations are used or reproduced through AI systems.
To address these concerns, policymakers and industry organizations could establish AI-specific contractual standards requiring creator consent, attribution, and compensation when AI systems generate derivative works. Strengthening moral rights protections alongside collective bargaining frameworks would help ensure that collaboration between studios and AI companies does not come at the expense of creative labor.
Image Source: LA Times Business