Social network X has blocked searches for Taylor Swift after deepfake pornographic images of the singer went viral. Users searching for her name on the platform see an error message and a prompt to reload the page. Joe Benarroch, X's head of business operations, said the block was a temporary measure taken out of caution.

Nude photos of Taylor Swift created by AI flooded X and Facebook before being removed. (Photo: Reuters)

AI-generated pornographic images of Taylor Swift began circulating on X on January 24, prompting the singer's fans to post streams of real photos of the star to drown out the deepfakes. They also urged X to take stronger action to delete the images and reported the accounts sharing them. The slogan “protect Taylor Swift” became a top trend on the platform.

A day later, the reported accounts had been suspended or restricted, but AI-generated pornography was still easy to find on X.

In a statement over the weekend, X said it prohibits the sharing of non-consensual nude images and has a zero-tolerance policy toward such content. The statement did not mention Swift by name.

“Our team is actively removing all images that have been identified and taking appropriate action against the accounts that posted them. We will be closely monitoring the situation to ensure any further violations are addressed immediately and the content removed,” X said.

Advances in generative AI have made it easier than ever to create fake pornographic images of real people. At X, many moderation rules were relaxed after Elon Musk acquired the platform in 2022, and thousands of employees were laid off to shore up the company's finances.

It’s unclear where the Taylor Swift deepfakes originated, but according to NBC News they carried a watermark linking them to a website known for distributing fake celebrity pornography; the site even has a section dedicated to AI deepfakes. The fake photos of Swift were viewed more than 27 million times and liked more than 260,000 times within 19 hours of being posted.

Following the incident, Microsoft CEO Satya Nadella called for more guardrails to be put in place around AI. In an interview with NBC News, he argued that we have a responsibility to oversee this emerging technology so that safer content is produced.

Responding to a question about the Swift deepfakes, the Microsoft chief said: "Yes, we have to act. I think we all benefit from a safe online world. I don't think anyone wants a completely unsafe online world for both content creators and content consumers. So I think we need to act quickly on this issue."

According to 404 Media, members of a Telegram group claimed they used Microsoft's AI image-generation tool Designer to create the fake photos of Swift. Nadella did not comment directly on 404 Media's report, but in a statement sent to the outlet, Microsoft said it was investigating and would take appropriate action.

The Biden administration called the spread of the pornographic Taylor Swift deepfakes "alarming" and urged social media companies to examine their "vital role" in enforcing rules against the spread of misinformation and intimate images shared without consent.

In October 2023, President Biden signed an executive order on AI focused on regulating emerging technology and its risks.

(According to The Hill, WSJ)