AI News

Meta's platforms showed hundreds of "nudify" deepfake ads, CBS News investigation finds - CBS News

Jun 7, 2025

A CBS News investigation revealed that Meta's platforms, including Instagram, hosted hundreds of advertisements promoting AI tools used to create sexually explicit "nudify" deepfakes. The ads often showcased the tools' ability to generate nude images of real people from uploaded photos, and some redirected users to websites or app stores where the tools could be downloaded.

Meta has since removed the ads, deleted the associated pages, and blocked the URLs, citing its policies against non-consensual intimate imagery. The ads were targeted at men aged 18 to 65 in the United States, the European Union, and the United Kingdom. Meta acknowledged that the spread of such AI-generated content is an ongoing challenge, with creators constantly evolving their tactics to evade detection.

Despite Meta's advertising standards prohibiting adult nudity and sexual activity, as well as its policy against bullying and harassment, the investigation found that ads for "nudify" deepfake tools remained on Instagram even after initial removals. Experts believe Meta's leadership lacks the will to address the issue adequately, even though the company employs content moderators.

The article highlights the increasing sophistication of deepfake technology and its potential for misuse, particularly around consent and online safety. The bipartisan "Take It Down Act" requires social media companies to remove deepfake content within 48 hours of notice from a victim, but it does not target the tools used to create such content.

The investigation raises concerns about the role major tech companies play in promoting these apps: one "nudify" website advertised on Instagram required no age verification before a photo could be uploaded to generate a deepfake image.

Experts say cross-industry cooperation is needed to prevent such apps from being marketed as "nudification" tools on any platform.