Bryan Cranston and SAG-AFTRA say OpenAI is taking their deepfake concerns seriously after the actor’s likeness appeared in AI-generated videos without his consent. Following public criticism, OpenAI has reportedly strengthened its guardrails and clarified policies around its Sora 2 video platform, which drew controversy for reproducing celebrity likenesses.
Actors, studios, and agents voiced frustration after discovering their faces and voices used in OpenAI’s Sora 2 videos. Bryan Cranston, best known for Breaking Bad, said he never opted in to appear on the app, yet videos featuring him surfaced, including one showing him taking a selfie with Michael Jackson.
The incident triggered widespread outrage across Hollywood. SAG-AFTRA, along with the talent agencies UTA and CAA and the Association of Talent Agents (ATA), urged OpenAI to address the misuse of digital likenesses immediately.
In a joint statement, Bryan Cranston and SAG-AFTRA said OpenAI is taking their deepfake concerns seriously, confirming that the company has “strengthened guardrails” around its opt-in policy for likeness and voice usage. OpenAI also expressed “regret for these unintentional generations,” acknowledging the harm caused to artists and performers.
However, OpenAI did not share specific technical details on how the policy will change, nor did it respond to further media inquiries. Still, its statement reaffirmed a broader commitment: “All artists, performers, and individuals will have the right to determine how and whether they can be simulated.”
While Cranston praised OpenAI’s willingness to act, he emphasized that this should be a turning point for the entire industry. SAG-AFTRA president Sean Astin echoed that sentiment, noting that policy changes alone aren’t enough. He called for legislative action to protect performers from what he described as “massive misappropriation by replication technology.”
Astin pointed to the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), which would make it illegal to create or distribute unauthorized AI-generated likenesses. The union has lobbied for this bill, arguing that AI deepfakes threaten the creative economy and artists’ control over their image.
OpenAI’s latest move marks a step forward on deepfake accountability. By engaging with Cranston and SAG-AFTRA’s concerns, the company appears more open to collaboration with the entertainment industry. Still, experts say transparency and enforcement, not just promises, remain key.
The discussion around AI likeness rights continues to grow as more creators, actors, and musicians demand consent-based frameworks. With laws like the NO FAKES Act under review, OpenAI’s cooperation could set a new precedent for responsible AI use in entertainment.
That OpenAI is taking Bryan Cranston and SAG-AFTRA’s deepfake concerns seriously is a promising sign for artists fighting to protect their digital identities. Whether these commitments lead to lasting change depends on how quickly companies, lawmakers, and unions turn dialogue into enforceable standards.
As AI continues to evolve, one thing remains clear: consent must be the cornerstone of creative technology.