Deepfake Disaster: How AI is Fueling Nonconsensual Synthetic Forgeries and What Companies Are (or Aren’t) Doing About It
Generative AI’s dark side: deepfake porn. From nudify apps to manipulated celebrity images, tech companies grapple with nonconsensual synthetic forgeries. As governments legislate and platforms enforce policies, the line between innovation and abuse remains blurred.

Hot Take:
Deepfakes: the uninvited guest no one wants but everyone keeps talking about. It's as if someone gave Photoshop steroids and a sinister sense of humor. From tech giants to government bodies, everyone is scrambling to put this genie back in the bottle. Spoiler alert: it's not going well.
Key Points:
- Deepfake porn is spreading rapidly and harming real people, including minors.
- Legislation to combat deepfakes is piecemeal and inconsistent across different regions.
- Tech companies have varying policies, from strict bans to more lenient approaches.
- Some AI tools, such as Anthropic's Claude, are barred by their usage policies from generating any NSFW content.
- Platforms like Apple, Google, and Meta are under scrutiny for their roles in distributing or hosting deepfake content.