Nonconsensual sexually explicit deepfakes of Taylor Swift circulated widely on X on Wednesday, racking up more than 27 million views and 260,000 likes in roughly 19 hours before the account responsible was suspended. The images, which depicted Swift nude and in sexual scenarios, are a troubling example of how AI-generated content and misinformation proliferate across online platforms.
The origin of the images is unclear, but they bear a watermark indicating they came from a website known for fabricating fake nude images of celebrities. Reality Defender, an AI-detection software company, reviewed the images and concluded they were likely created with AI.
Yet social media platforms such as X, despite offering their own generative-AI products, have not developed effective tools to detect and remove this type of content, even though it violates their own guidelines.
In Swift's case, a mass-reporting campaign by her fans appears to have been instrumental in getting the deepfakes removed from the platform. A 17-year-old Marvel star had previously spoken out about sexually explicit deepfakes of her circulating on X, with limited success in getting them taken down.
Compounding the problem, the U.S. has no federal law governing the creation and dissemination of nonconsensual sexually explicit deepfakes. A bill criminalizing such deepfakes at the federal level was introduced in May 2023, but it has made little progress since.
Carrie Goldberg, a lawyer who represents victims of deepfakes, emphasized that tech platforms should use AI to identify and remove these harmful images. While platforms have rules against deepfakes, enforcing them effectively remains a challenge.
Ultimately, addressing the spread of deepfakes requires a concerted effort from tech platforms, lawmakers, and the public to protect individuals from the harmful effects of nonconsensual sexually explicit content.
– Nonconsensual sexually explicit deepfakes of Taylor Swift went viral on X
– Origin unclear but likely created with AI technology
– Tech platforms like X have yet to deploy effective tools to detect AI-generated content
– Swift has faced misogynistic attacks for attending NFL games with her partner
– X's response to removing deepfakes has been consistently slow or ineffective
– U.S. lacks laws governing the creation and spread of nonconsensual sexually explicit deepfakes
– Legislation introduced to criminalize deepfakes at the federal level in the U.S.
– Victims largely left to fight against deepfakes on their own, despite platform policies against them
– AI technology can be used to identify and remove deepfake content
#TaylorSwift #deepfakes #AI #Xplatform #sexuallyexplicit #manipulatedmedia #misogyny #celebritydeepfakes #Swiftfans #onlineattacks #protectTaylorSwift #federaldeepfakelaw #nonconsensualimages #techplatforms #AItechnology