
Fighting for Stronger Protections Against AI Exploitation
Explicit deepfakes are a growing crisis, yet the legal framework to address them remains inadequate. Victims have little recourse, and perpetrators often operate with impunity. That’s why we at Girls for Algorithmic Justice are actively advocating for bipartisan legislation aimed at protecting victims of explicit deepfakes, strengthening digital privacy, and holding tech companies accountable for deepfake distribution. Below are the key U.S. bills we are working to get passed and how they contribute to the fight against explicit deepfakes.
Bills We Support
TAKE IT DOWN Act (H.R. 1205 / S.4569)
Status: Introduced in the U.S. House and Senate (Pending Committee Review)
The TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks) is designed to protect victims of non-consensual deepfake images and videos by ensuring swift removal mechanisms and legal recourse. The bill requires online platforms and content hosts to take down explicit deepfakes within 48 hours of a verified complaint, preventing prolonged harm to victims. It also introduces penalties for platforms that fail to comply, ensuring that companies take responsibility for content moderation.
How this helps: Victims often struggle to remove deepfake content from the internet, facing bureaucratic hurdles and unresponsive platforms. This bill would force online platforms to take victim reports seriously and remove harmful content quickly, reducing lasting harm.
DEFIANCE Act (H.R. 3489 / S.3696)
Status: Introduced in the U.S. House and Senate (Pending Committee Review)
The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) seeks to give victims of non-consensual deepfake images and videos a federal civil remedy. Under this bill, a person depicted in an explicit deepfake could sue anyone who knowingly produced, distributed, or possessed with intent to distribute the image without the depicted person’s consent, and recover significant damages.
How this helps: Currently, only a handful of states have laws targeting explicit deepfakes, and enforcement is inconsistent. The DEFIANCE Act would establish nationwide protections, ensuring that deepfake abuse carries enforceable consequences and that victims have civil recourse regardless of where they live.
Algorithmic Accountability Act
Status: Reintroduced in the U.S. Congress (Pending Committee Review)
The Algorithmic Accountability Act aims to regulate AI systems that impact people’s rights, safety, and economic opportunities. It would require companies using AI-powered decision-making tools to conduct bias and impact assessments to ensure their systems do not reinforce discrimination. This includes auditing AI models that generate deepfakes and mandating greater transparency from tech companies regarding their AI tools.
How this helps: While the bill is not specific to deepfakes, it establishes stronger oversight of AI-generated content, including harmful applications like non-consensual deepfakes. By requiring algorithmic transparency and impact assessments, this bill could push tech companies to implement better safeguards against the misuse of AI for explicit content generation.