Social media platform X has blocked searches for Taylor Swift after sexually explicit deepfake images of the singer went viral. Users searching for Taylor Swift on X see an error message and a prompt to reload the page. Joe Benarroch, X's Head of Business Operations, said the block is a temporary measure taken out of an abundance of caution.
AI-generated pornographic images of Taylor Swift began circulating on X on January 24th, prompting fans to flood the platform with real photos of the star to drown out the fakes. Fans also demanded that X act more aggressively to remove the images and reported accounts sharing them. The phrase "protect Taylor Swift" trended on the platform.
A day later, the reported accounts were suspended or restricted, although AI-generated explicit images were still easily found on X.
In a statement over the weekend, X said it prohibits the sharing of synthetic and manipulated sexual imagery posted without the subject's consent and maintains a zero-tolerance policy toward such content. The statement did not mention Swift by name.
“Our team is actively removing all identified images and taking appropriate action against the accounts that posted them. We will closely monitor the situation to ensure any further violations are addressed immediately and the content is removed,” X said.
Advances in generative AI have made it easier to fabricate pornographic images of real people. Since Elon Musk acquired X in 2022, the platform has relaxed many of its content moderation policies while laying off thousands of employees to stabilize its finances.
According to NBC News, it's unclear where the deepfake images of Taylor Swift originated, but they were watermarked, indicating they came from a website known for distributing fake pornographic images of celebrities. The website even has a dedicated section for AI deepfakes. The fake images of Swift were viewed over 27 million times and received over 260,000 likes within 19 hours of being posted.
Following the incident, Microsoft CEO Satya Nadella called for more regulation of AI. In an interview with NBC News, he argued that the industry has a responsibility to oversee the emerging technology so that it produces more safe content.
Asked about the Swift deepfakes, the Microsoft chief said: "Yes, we have to act. I think we all benefit from a safe online world. I don't think anyone wants an online world that's completely unsafe for both content creators and consumers. So I think we need to act quickly on this issue."
According to 404 Media, a Telegram chat group claimed to have used Microsoft's AI-powered Designer tool to create the fake images of Swift. Nadella did not comment directly on 404 Media's report, but in a statement sent to the outlet, Microsoft said it was investigating and would take appropriate action.
The administration of U.S. President Joe Biden called the spread of the pornographic Taylor Swift deepfakes "alarming" and urged social media companies to examine their "important role" in enforcing rules against the spread of misinformation and non-consensual intimate imagery.
In October 2023, President Biden signed an executive order on AI focusing on managing the emerging technology and its risks.
(According to The Hill, WSJ)