Introduction to the Problem
Meta has removed a number of ads that promote "nudify" apps, which are AI tools used to create sexually explicit deepfakes of real people, after a CBS News investigation found hundreds of such advertisements on its platforms.
Meta’s Response
A Meta spokesperson stated, "We have strict rules against non-consensual intimate images; we removed these ads and deleted the pages responsible for running them and permanently blocked the URLs associated with these apps." The removal of these ads is in line with Meta’s advertising standards, which prohibit nudity, representations of people in explicit or sexually suggestive positions, and activities that are sexually suggestive.
The Extent of the Problem
CBS News discovered dozens of these ads on Meta's Instagram platform, promoting AI tools that let users upload a photo of a person and "see everyone naked". Other ads promoted the ability to upload and manipulate videos of real people. Some of the URLs in the ads led to websites promoting the ability to superimpose images of real people into sex scenes, with some of the applications charging between $20 and $80 for access to these "exclusive" and "advanced" features.
Analysis of the Advertisements
An analysis of Meta's ad library showed that hundreds of these advertisements, at a minimum, were running across the company's platforms, including Facebook, Instagram, Threads, the Facebook Messenger application, and the Meta Audience Network, a service that lets Meta place ads on third-party mobile apps and websites that partner with the company.
Target Audience and Challenges
According to Meta, many of these advertisements were specifically targeted at men between the ages of 18 and 65 and were active in the United States, the European Union, and the United Kingdom. The spread of this type of AI-generated content is a persistent problem, and combating it has become an increasingly demanding challenge for Meta. The people behind these exploitative apps constantly evolve their tactics to evade detection, forcing Meta to keep strengthening its enforcement.
Deepfakes and the Law
Deepfakes are manipulated pictures, audio recordings, or videos of real people that have been altered using artificial intelligence to make it seem as if the person said or did something they did not actually say or do. Last month, President Trump signed a bipartisan law that requires websites and social media companies to remove deepfake content within 48 hours of being notified by a victim. Although the law makes it illegal to "knowingly publish" or distribute intimate images without a person's consent, including AI-generated deepfakes, it does not target the tools used to create this content.
Industry Collaboration
Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, has studied the rise of AI deepfakes on social platforms for over a year. He believes Meta's leadership lacks the will to tackle the problem, even though its moderators have the capacity to do so. Mantzarlis also said his research found deepfake generators available for download on both the Apple App Store and Google Play Store, and he expressed frustration at these massive platforms' inability to moderate their content effectively. He called for industry-wide collaboration: if an app or website markets itself anywhere on the web as a tool for creating such content, then every other platform can respond: "Okay, I don't care what they present on my platform, they are gone."
Conclusion
The promotion of such apps by major technology companies raises serious questions about user consent and the online safety of minors. A CBS News analysis of one "nudify" website advertised on Instagram showed that the site did not prompt any age verification before a user uploaded a photo to generate a deepfake. Such problems are widespread, and data also show that a significant share of underage teenagers have interacted with deepfake content.