On social media site X (formerly Twitter), AI-generated fake images of Taylor Swift depicting the singer in sexually explicit poses have racked up tens of millions of views. While the images have since been removed from the platform, nothing on the internet is ever truly gone, and they will undoubtedly continue to circulate on other, less regulated platforms.
Singer Taylor Swift, arguably the most famous star in the world today. Photo: Getty Images
Like most major social media platforms, X's policy prohibits sharing "synthetic, manipulated, or out-of-context media that could mislead or confuse people and lead to harm."
The incident has heightened concerns about AI, as bad actors can use the technology to create misleading images and videos, fueling widespread misinformation.
“This is a prime example of how AI is being deployed for nefarious purposes without adequate safeguards,” said Ben Decker, who runs digital forensics agency Memetica. “The exploitation of generative AI tools to create malicious content targeting celebrities is growing rapidly and spreading faster than ever on social media.”
“Social media companies don’t really have effective plans for monitoring content,” he said. X, for example, has laid off most of its content moderation staff, relying solely on automated systems and user reports. Meta has also cut teams that tackle misinformation and harassment on its platform.
The incident also coincides with the explosion of AI tools like ChatGPT and Dall-E. Alongside these mainstream tools, Decker said, there has been a rise in unmoderated, not-safe-for-work AI models on open-source platforms.
“This is a sign of a major rift in content moderation and platform governance between AI companies, social media companies, regulators, and civil society. Unless they all agree on how to address this rift, this type of fake content will continue to proliferate,” he added.
But Decker said that the targeting of Taylor Swift, the world’s biggest celebrity, could draw more attention to the growing problems surrounding AI-generated imagery. This week, the singer’s massive global fan base took to social media to express outrage, pushing the issue to the forefront.
When influential figures like Taylor Swift fall victim to AI abuse, media pressure will push lawmakers and tech companies to act, Decker said.
This type of technology has long been used by bad actors to create non-consensual pornographic images of others and post them online for malicious purposes. But it was only when a celebrity of Taylor Swift's stature fell victim that the practice gained widespread public attention.
In the US, nine states have enacted laws against the non-consensual creation or sharing of deepfakes, which are synthetic images created to mimic a person's likeness.
Hoai Phuong (according to CNN)