U.S. Congress Passes Take It Down Act to Combat AI-Generated Deepfakes and Revenge Porn
Image Credit: Ian Hutchinson | Unsplash
On April 28, 2025, the U.S. House of Representatives passed the Take It Down Act with a 409-2 vote, following unanimous Senate approval in February. This bipartisan legislation, now awaiting President Donald Trump’s signature, aims to address the growing problem of non-consensual intimate imagery (NCII), including AI-generated deepfakes and traditional "revenge porn." The bill criminalizes the publication of such content and mandates that online platforms remove it within 48 hours of a victim’s request. While hailed as a step toward protecting victims, the legislation has sparked concerns about potential censorship and First Amendment violations.
[Read More: The Deepfake Dilemma: Navigating the AI Apocalypse]
The Take It Down Act: Key Provisions
The Take It Down Act, formally known as the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, targets the non-consensual sharing of sexually explicit images and videos, with a particular emphasis on AI-generated deepfakes. These deepfakes use artificial intelligence to create realistic but fabricated intimate imagery of identifiable individuals, often causing significant emotional and reputational harm. The Act makes it a federal crime to knowingly publish or threaten to publish NCII, with penalties of up to two years’ imprisonment and harsher punishments for cases involving minors.
Social media platforms and websites must remove flagged NCII within 48 hours of a victim’s notification and delete any duplicate content. This marks a rare federal intervention in regulating internet companies, as most existing laws on NCII are state-based, with 30 states explicitly addressing deepfakes. The Act aims to create a uniform federal standard to address inconsistencies in state laws and ensure faster content removal.
AI’s Role in the Legislation
The rise of AI technologies has amplified the NCII problem, enabling perpetrators to create highly convincing deepfake pornography with accessible tools. Victims, such as 14-year-old Elliston Berry, whose AI-generated deepfake was shared on Snapchat, have faced prolonged struggles to remove such content. The Act’s focus on AI-generated imagery reflects the urgency of addressing this technological threat, which has outpaced existing legal frameworks. By targeting deepfakes explicitly, the legislation acknowledges AI’s dual role as both a tool for abuse and a challenge for enforcement, given the difficulty in distinguishing real from fabricated content.
[Read More: The Alarming Surge of AI-Enhanced Image Abuse Among Youth]
Bipartisan Support and Advocacy
The Take It Down Act has garnered widespread bipartisan backing, led by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN) in the Senate, and Representatives María Elvira Salazar (R-FL) and Madeleine Dean (D-PA) in the House. First Lady Melania Trump has been a prominent advocate, aligning the bill with her “Be Best” initiative focused on online safety. In March 2025, she hosted a Capitol Hill roundtable with victims, including Berry, emphasizing the emotional toll on teenagers, particularly girls. Her involvement was pivotal in expediting the bill’s House passage.
The legislation also draws support from tech companies like Meta, which owns Facebook and Instagram, and Snapchat, as well as advocacy groups and the Information Technology and Innovation Foundation. Meta’s spokesman, Andy Stone, noted the company’s commitment to preventing NCII, while Senator Klobuchar highlighted the Act’s role in empowering victims and holding perpetrators accountable. Personal testimonies, such as that of South Carolina state Representative Brandon Guffey, whose son died by suicide after a sextortion scam, underscored the human cost of NCII and bolstered bipartisan consensus.
[Read More: Florida Mother Sues Character.AI: Chatbot Allegedly Led to Teen’s Tragic Suicide]
Censorship and Free Speech Concerns
Despite broad support, the Take It Down Act faces criticism from digital rights groups, including the Electronic Frontier Foundation (EFF) and the Cyber Civil Rights Initiative, which argue its language is overly broad. Critics warn that the takedown provision could inadvertently censor lawful content, such as legal pornography, LGBTQ expression, or journalistic material like photos of public protests or law enforcement notices. The 48-hour removal deadline may force platforms to rely on automated filters, which often misflag legitimate content because they lack nuance.
The EFF contends that the Act lacks safeguards against bad-faith takedown requests, potentially enabling abuse to silence critics or remove consensual content falsely reported as non-consensual. Smaller platforms, with limited resources to verify content, may preemptively remove material to avoid legal risks, chilling free speech. Additionally, the requirement to monitor content could pressure platforms to scan encrypted communications, raising privacy concerns. These issues have led some, including Representative Thomas Massie (R-KY), who cast one of the two “no” votes, to call the Act a “slippery slope” for censorship.
[Read More: Grok 3 Controversy: xAI Faces Censorship Claims Over Musk, Trump]
Implications and Broader Context
The Take It Down Act represents a significant step in addressing AI-driven harms, particularly as deepfake technology becomes more accessible. Its passage marks the first major U.S. law targeting AI-generated NCII, setting a precedent for future AI-related legislation. However, its implementation will face challenges, including balancing victim protection with free speech and ensuring platforms can comply without over-censoring. The Act’s success will depend on clear enforcement guidelines and mechanisms to prevent misuse.
Other proposed AI bills, such as the NO FAKES Act and the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, suggest Congress is increasingly focused on AI’s societal impacts. As AI tools evolve, lawmakers will need to refine legal frameworks to address emerging threats while safeguarding constitutional rights. The Take It Down Act’s near-unanimous passage reflects a rare moment of bipartisan unity, but its critics underscore the complexity of regulating technology in a polarized digital landscape.
[Read More: Safeguarding Identity: The Entertainment Industry Supports Bill to Curb AI Deepfakes]
Source: AP News, The Register, PetaPixel, US Congress