Deepfakes are videos and images that are digitally created or altered using artificial intelligence (AI) or machine learning (ML). Pornography created with the technology first began circulating on the internet several years ago, when a Reddit user shared fake porn clips featuring celebrities' faces.
Anyone can be a victim. (Photo: GI)
Since then, deepfake creators have continued to post similar videos and images targeting online influencers, journalists, and others with public profiles. Thousands of such videos exist on numerous websites.
As AI technologies become increasingly accessible, users can turn anyone they want into a “porn star” without that person's consent, or, more dangerously, use the technology to defame or blackmail others.
Experts say the problem could get worse with the development of generative AI tools that are trained on billions of images from the internet and generate new content using existing data.
“The reality is that technology will continue to proliferate, it will continue to evolve, and it will continue to become more accessible and easier to use,” said Adam Dodge, founder of EndTAB, a group that provides training on technology abuse. “And as long as that happens, people will inevitably … continue to abuse that technology to harm others, primarily through online sexual violence, deepfake pornography, and fake nudes.”
Noelle Martin of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when she Googled her own image out of curiosity one day. To this day, Martin says she has no idea who created the fake porn images or videos. She suspects someone used her social media images to create pornographic videos.
Martin tried to contact the sites to remove the videos, but her efforts were in vain. Some did not respond. Others took them down but quickly reposted them. “You can’t win,” Martin said. “It’s on the internet forever and it can ruin your life.”
Tech companies' efforts to fix it
Some AI companies say they have restricted their tools from producing pornographic images.
OpenAI said it removed pornographic content from the data it used to train its DALL-E image generator, which should limit the potential for misuse. The company also filters requests and says it blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another image-generation service, also blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Meanwhile, Stability AI has also rolled out an update that prevents users from creating explicit images. The changes come after reports that some users were using the technology to create celebrity-inspired nudes.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques, such as image recognition, to detect nudity. But because the company has released the software's code as open source, users can modify it and generate whatever they want.
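To make that two-layer mechanism concrete, here is a minimal sketch of a filter of the kind Bishara describes: a keyword blocklist on the text prompt, plus an image-recognition check on the generated output. It is illustrative only, not Stability AI's actual code; every name in it (BLOCKED_KEYWORDS, run_model, classify_nsfw, the 0.5 threshold) is a hypothetical stand-in.

```python
# Illustrative sketch of a two-layer safety filter: a keyword
# blocklist applied to the text prompt, plus an image-recognition
# check on the generated output. NOT Stability AI's actual code;
# all names and thresholds here are hypothetical stand-ins.

BLOCKED_KEYWORDS = {"nude", "nsfw", "explicit"}  # hypothetical blocklist


def prompt_allowed(prompt: str) -> bool:
    """Reject prompts that contain any blocked keyword."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_KEYWORDS)


def classify_nsfw(image: bytes) -> float:
    """Stand-in for a trained nudity classifier.

    A production filter would run a real model here and return
    the probability that the image is explicit.
    """
    return 0.0


def run_model(prompt: str) -> bytes:
    """Stand-in for the image generator itself."""
    return b""  # a real call would return image bytes


def generate_safely(prompt: str) -> bytes | None:
    """Apply both gates: block bad prompts, then screen the output."""
    if not prompt_allowed(prompt):
        return None  # refused before any image is generated
    image = run_model(prompt)
    if classify_nsfw(image) >= 0.5:  # hypothetical threshold
        return None  # generated image failed the nudity check
    return image
```

The reason for the double layer is that a prompt blocklist alone is easy to evade with paraphrases, so the output image is screened independently. It also shows why Bishara's caveat matters: in open-source software, a user can simply edit the code and delete both checks.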
Some social media companies have also tightened rules to better protect their platforms from harmful material.
Last month, TikTok said all deepfake or manipulated content must be labeled to indicate it is fake or altered in some way.
Livestreaming platform Twitch also recently updated its policies on deepfakes after a popular streamer named Atrioc was caught with a deepfake porn site open in his browser during a livestream in late January.
Apple and Google said they recently removed an app from their app stores for using sexually suggestive deepfake videos to market its product.
From Western actresses to K-pop idols
Research on deepfake porn is rare, but a 2019 report by AI firm DeepTrace Labs found that it was almost entirely weaponized against women. Western actresses were the most targeted, followed by K-pop singers.
Meta spokesperson Dani Lever said in a statement that the company's policies restrict both AI-generated and non-AI-generated adult content, and it also restricts deepfake-generating app pages from advertising on its platform.
In February, Meta, along with adult sites such as OnlyFans and Pornhub, began participating in an online tool called Take It Down, which allows teens to report sexually explicit images and videos of themselves that have spread online. The site handles both real footage, such as spycam videos, and AI-generated content.
Hoang Ton (according to AP)