Deepfakes are videos and images created or digitally altered using artificial intelligence (AI) or machine learning (ML). Pornographic content created using this technology first began circulating on the internet a few years ago when a Reddit user shared fake porn clips featuring the faces of celebrities.
Anyone can become a victim. Photo: GI
Since then, deepfake creators have continuously posted similar videos and images targeting online influencers, journalists, and others with public profiles. Thousands of such videos exist on numerous websites.
As AI technologies become increasingly accessible, users can turn anyone they want into a "pornographic film star" without their consent or, more dangerously, use the technology to defame or blackmail others.
Experts say the problem could worsen with the development of generative AI tools that are trained on billions of images from the internet and produce new content using existing data.
“The reality is that technology will continue to proliferate, will continue to evolve, and will continue to become easier and more accessible,” said Adam Dodge, founder of EndTAB, a group that provides training on technology abuse. “And as long as that continues, people will… continue to abuse that technology to harm others, primarily through online sexual violence, deepfake pornography, and fake nudity.”
Noelle Martin from Perth, Australia, experienced this firsthand. The 28-year-old discovered her own deepfake pornographic images 10 years ago when, out of curiosity, she used Google to search for pictures of herself. To this day, Martin says she doesn't know who created these fake pornographic images or videos. She suspects someone used images from her social media to create the pornographic videos.
Martin contacted the websites to have the videos taken down, but her efforts were in vain. Some didn't respond. Others removed the content, only for it to quickly reappear. "You can't win," Martin said. "It will stay on the internet forever and could ruin your life."
Tech companies' efforts to rectify the situation.
Some AI developers say they have already curbed access to explicit images.
OpenAI says it has removed pornographic content from the data used to train its DALL-E image generation tool, which limits the potential for misuse. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another image generator, blocks certain keywords and encourages users to flag problematic images for moderators.
Meanwhile, Stability AI has also released an update to prevent users from creating sensitive images. These changes come after reports that some users were creating celebrity-inspired nude photos using the technology.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques, such as image recognition, to detect nudity. But because the company has released the application's code as open source, users can modify the software and create whatever they want.
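Stability AI has not published how this filter works internally, but the two-stage pattern Bishara describes (keyword screening of the prompt, then image recognition on the output) is easy to illustrate. The sketch below is a hypothetical reconstruction of that pattern, not Stability AI's actual code; the blocklist, the `classify_nsfw` stub, and the 0.5 threshold are all illustrative assumptions.

```python
# Hypothetical two-stage safety filter: screen the text prompt,
# then screen the generated image. Illustrative sketch only --
# NOT Stability AI's actual implementation.

BLOCKED_TERMS = {"nude", "nsfw", "explicit"}  # placeholder blocklist


def prompt_is_blocked(prompt: str) -> bool:
    """Stage 1: reject prompts containing blocklisted keywords."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)


def classify_nsfw(image_bytes: bytes) -> float:
    """Stage 2 stub: a real system would run an image classifier
    trained to detect nudity and return a probability. Hard-coded
    to 0.0 here so the sketch runs standalone."""
    return 0.0


def generate_safely(prompt: str, generate) -> bytes | None:
    """Generate an image only if both checks pass; return None
    (i.e., refuse) otherwise."""
    if prompt_is_blocked(prompt):
        return None  # refuse before any compute is spent
    image = generate(prompt)
    if classify_nsfw(image) > 0.5:  # threshold is an assumption
        return None  # suppress an output flagged as explicit
    return image
```

The open-source caveat Bishara raises is visible here: anyone running the code locally can simply delete these checks, which is why such filters only constrain users of hosted services.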
Some social media companies have also tightened their rules to better protect their platforms from harmful material.
Last month, TikTok said all deepfake or manipulated content must be labeled to indicate that it is fake or has been altered in some way.
The gaming platform Twitch recently updated its policies on deepfake images after a popular streamer known as Atrioc was caught during a late-January live stream with a deepfake porn website open in his browser.
Apple and Google recently said they removed an app from their app stores that was using sexually suggestive deepfake videos to market its products.
From Western actresses to K-pop idols.
Research into pornographic deepfakes is not widespread, but a 2019 report published by the AI company DeepTrace Labs showed that it is almost entirely weaponized against women. The most targeted individuals are Western actresses, followed by K-pop singers.
Meta spokesperson Dani Lever said in a statement that the company's policies restrict both AI-generated and non-AI adult content, and that Meta has barred the deepfake creation app's page from advertising on its platforms.
In February, Meta, along with adult websites like Only**** and Porn***, began participating in an online tool called Take It Down, which allows teenagers to report pornographic images and videos of themselves for removal from the internet. The site handles both real recordings and AI-generated content.
Hoang Ton (according to AP)