Congress Just Passed a Law to Protect People From Deepfakes and Revenge Porn

In a rare show of bipartisan agreement, the U.S. House of Representatives passed the Take It Down Act on Monday in a 409-2 vote. The bill, which aims to combat the spread of non-consensual intimate images—including AI-generated deepfakes and revenge porn—had previously passed the Senate unanimously and is now headed to President Trump’s desk for a signature.

The legislation requires websites and platforms to remove explicit images of individuals that were shared without their consent within 48 hours of a victim’s report. The law includes content generated by artificial intelligence and is enforced by the Federal Trade Commission using its existing authority over deceptive and unfair business practices. The bill does not modify Section 230 of the Communications Decency Act, the law that typically shields platforms from liability for user-generated content.

Supporters of the bill say it addresses the increasing accessibility and abuse of AI tools that can fabricate explicit images, often targeting women and minors.

“The Take It Down Act is a powerful statement that we stand united in protecting the dignity, privacy and safety of our children,” said First Lady Melania Trump, who has supported the measure as part of her “Be Best” initiative focused on children’s online safety.

The bill was co-authored by Senators Amy Klobuchar (D-MN) and Ted Cruz (R-TX), who both cited the rise in AI-enabled exploitation as a primary concern.

“By requiring social media companies to take down this abusive content quickly, we are sparing victims from lasting trauma,” Cruz said.

Survivors and advocates also played a key role in advancing the legislation. Teenagers like Elliston Berry and Francesca Mani, who spoke publicly about their experiences with AI-generated explicit images, were credited by lawmakers as helping build urgency and consensus around the bill.

“We weren’t just stories—they saw our humanity,” Mani told reporters following the vote.

Major tech companies including Meta, Snap and Google expressed support for the bill. Their backing was widely attributed to the bill’s decision not to alter Section 230 protections.

However, the legislation has raised concerns among some digital rights organizations. The Electronic Frontier Foundation and Fight for the Future argue that the 48-hour removal requirement may pressure smaller platforms to adopt automated content moderation tools prone to errors and false removals. Critics also worry that bad-faith actors could abuse the takedown process to remove content they simply dislike or disagree with, chilling free speech on social platforms.

“If only Senator Cruz or anyone on the Hill had taken the time to make a few minor corrections, this law could’ve avoided some unintended consequences,” said Lia Holland, a spokesperson for Fight for the Future.

Despite those concerns, the law marks one of the most significant federal efforts to address AI-enabled sexual exploitation and image-based abuse.

President Trump is expected to sign the bill into law later this week.

© 2023 RELEVANT Media Group, Inc. All Rights Reserved.
