Social network X has blocked searches for Taylor Swift after deepfake pornographic images of the singer went viral. Users searching for Taylor Swift on the platform now see an error message and a prompt to reload the page. Joe Benarroch, head of business operations at X, said the block is a temporary measure taken out of an abundance of caution.
AI-generated pornographic images of Taylor Swift began circulating on X on January 24, prompting the singer's fans to post streams of real photos of the star to drown out the deepfakes. They also urged X to act more aggressively to delete the images and report the accounts sharing them. The slogan "protect Taylor Swift" became a top trend on the platform.
A day later, the reported accounts had been suspended or restricted, but AI-generated pornography remained easy to find on X.
In a statement over the weekend, X said it prohibits sharing of graphic images without the subject's permission and has a zero-tolerance policy for such content. The statement did not mention Swift by name.
“Our team is actively removing all images that have been identified and taking appropriate action against the accounts that posted them. We will be closely monitoring the situation to ensure any further violations are addressed immediately and the content is removed,” X said.
Advances in generative AI have made it easier to fabricate pornographic images of real people. As for X, since it came under Elon Musk's control in 2022, many content-moderation rules have been relaxed, and Musk has laid off thousands of employees to shore up the company's finances.
According to NBC News, it is unclear where the deepfake images of Taylor Swift originated, but they carried a watermark indicating they came from a website known for distributing fake pornographic images of celebrities. The website also has a section dedicated to AI deepfakes. The fake photos of Swift were viewed more than 27 million times and liked more than 260,000 times in the 19 hours after they were posted.
Following the incident, Microsoft CEO Satya Nadella called for more guardrails to regulate AI. In an interview with NBC News, he argued that there is a responsibility to oversee the emerging technology so that more safe content is produced.
Responding to a question about the Swift deepfakes, the Microsoft chief said: "Yes, we have to act. I think we all benefit from a safe online world. I don't think anyone wants an online world that is completely unsafe for both content creators and content consumers. So I think we need to act quickly on this issue."
According to 404 Media, a Telegram chat group claimed it used Microsoft's AI image-generation tool Designer to create the fake images of Swift. Nadella did not comment directly on 404 Media's report, but in a statement sent to the news site, Microsoft said it was investigating and would take appropriate action.
The administration of US President Joe Biden called the spread of the pornographic Taylor Swift deepfakes "alarming" and urged social media companies to examine their "important role" in enforcing rules against the spread of misinformation and intimate images shared without consent.
In October 2023, President Biden signed an executive order on AI focused on regulating the emerging technology and its risks.
(According to The Hill, WSJ)