The rise of deepfake technology has made it disturbingly easy to create fake nude images of people without their consent. These images, often AI-generated, are increasingly used to harass, intimidate, or silence victims—especially women. But a powerful new law just changed the rules. The Take It Down Act, signed by President Trump, now requires platforms to remove non-consensual intimate images, including deepfake nudes, within 48 hours of a valid report.
This groundbreaking legislation responds directly to the growing threat of deepfake pornography and sets a firm legal framework to protect victims—especially minors and professional women—against the rapid spread of AI-generated abuse.
The Take It Down Act is a bipartisan law that forces online platforms to act quickly when non-consensual sexually explicit content is reported. Under this law:
Platforms must remove flagged content within 48 hours of receiving a legitimate request.
Individuals who post such content can face fines or up to three years in prison.
Platforms that don’t comply may be held accountable by the Federal Trade Commission.
The law was inspired by real stories like that of Elliston Berry, a student whose classmate used a mobile app to generate and distribute fake explicit images of her. After struggling to get Snapchat to take the content down, her family reached out to Senator Ted Cruz, who co-sponsored the bill with Senator Amy Klobuchar. The bill passed Congress with near-unanimous support, a rare show of unity on an urgent issue.
Deepfake nudes are often labeled as “revenge porn,” but that term doesn’t capture the full scope of the problem. Victims are frequently targeted by strangers using AI-powered tools that turn ordinary images—such as social media selfies—into pornographic fakes.
Recent surveys show:
10% of young people know someone who’s been targeted.
6% have experienced it themselves.
Victims often suffer anxiety, depression, and even suicidal thoughts.
While teens are particularly vulnerable, professional women are being attacked at alarming rates, often to silence or discredit them. As Omny Miranda Martone, CEO of the Sexual Violence Prevention Association, explains, “job loss due to non-consensual explicit materials, including deepfakes, is far more common than the public realizes.”
Deepfake pornography is more than harassment—it’s a tool for sabotage. Martone outlines three ways workers are commonly targeted:
Sent directly to a boss to get someone fired.
Posted online, making it appear in search results.
Shared among coworkers, creating a hostile work environment.
Even Martone herself became a victim. A deepfake nude of her was shared on social media and emailed to her employer in an attempt to destroy her reputation. Ironically, the perpetrator didn't realize she was the CEO of an organization dedicated to fighting exactly this kind of abuse.
Women in public life are especially at risk. A study by the American Sunlight Project found:
35,000 deepfake pornographic images of U.S. members of Congress.
25 out of 26 victims were women.
Women lawmakers are 70 times more likely to be targeted than men.
In the UK, more than 30 politicians had their images altered and shared on a deepfake website that received more than 12.6 million visitors in three months.
As MP Stella Creasy put it, this isn’t about sex—it’s about power and control.
Teachers also face real-world consequences. One small-town educator lost her job when parents saw a deepfake porn image online. Despite the content being fake, the damage was irreversible. Many parents, not understanding the technology, simply demanded she be removed.
Nina Jankowicz, former head of the DHS Disinformation Governance Board, described her experience in The Atlantic:
“For more than a year, I have been the target of a widespread online harassment campaign, and deepfake porn... has become a prized weapon in the arsenal misogynists use to try to drive women out of public life.”
Even celebrities aren’t spared. In early 2024, deepfake images of Taylor Swift went viral on social media, gaining millions of views before they were removed. Activist Laura Bates called it “a new way of controlling women,” pointing out that the message is clear: no matter how powerful or successful a woman is, she can be degraded and objectified at will.
The Take It Down Act is a major step toward reclaiming control in a digital age where technology is too often weaponized. Now, platforms are legally required to:
Clearly outline how users can report deepfakes or explicit non-consensual imagery.
Act within 48 hours to take down the content.
Face consequences if they fail to protect users.
Though 48 hours may feel like a lifetime for victims, the law at least gives a clear path forward—and real consequences for bad actors.
As deepfake technology continues to evolve, strong legal protections are essential. The Take It Down Act offers a hopeful start. It recognizes the deep emotional and professional harm caused by non-consensual deepfake nudes and sends a strong message: this abuse won’t be tolerated.
If you or someone you know is targeted, act quickly. Document everything, report the abuse using the platform's outlined procedures, and seek legal help if needed. You're not alone, and thanks to this legislation, the law is now on your side.
Want to learn more about online safety, AI misuse, or digital rights? Explore our related posts, share this article to raise awareness, or leave a comment below. Let’s keep the conversation going—and the pressure on.