Explicit AI-generated images of Taylor Swift have recently circulated on X (formerly Twitter), exemplifying the growing challenge of curbing the spread of nonconsensual AI-generated pornography.
One notable post on X garnered over 45 million views, 24,000 reposts, and numerous likes before the account responsible was suspended for violating platform policy. Despite the removal, the images persisted on other accounts, sparking discussion and trending in certain regions.
X’s guidelines explicitly prohibit synthetic and manipulated media as well as nonconsensual nudity, yet the platform has faced criticism for its slow response to the incident.
Swift’s fan base responded by flooding the hashtags associated with the explicit content with authentic clips of Swift, aiming to bury the fake images.
This incident highlights the significant challenge in preventing the spread of deepfake porn and AI-generated images featuring real individuals.
While some AI image generators impose restrictions on generating explicit content involving celebrities, many others lack such safeguards.
The responsibility for curbing the dissemination of fake images often falls to social platforms, a difficult task, particularly for platforms with limited moderation capacity such as X.
It’s worth noting that X is currently under investigation by the EU over the alleged dissemination of illegal content and disinformation.
The company is also facing scrutiny regarding its crisis protocols following instances of misinformation about the Israel-Hamas war being promoted on the platform.