Deepfakes are now a major concern as the number of scams using this technology continues to rise.
After monitoring Darknet forums, where cybercriminals frequently operate, security experts found that so many criminals want to use Deepfake software for fraud that demand far exceeds the supply currently available on the market.
With demand outstripping supply, Kaspersky experts predict a surge in Deepfake scams using increasingly diverse and sophisticated methods: from high-quality impersonation videos to fake celebrity live streams on social media that promise to double whatever money victims send.
According to data from identity verification firm Regula, 37% of businesses worldwide have experienced voice-based Deepfake scams, and 29% have fallen victim to video-based Deepfake scams.
This technology has become a threat to Vietnam's cybersecurity, where cybercriminals often use fake video calls to impersonate individuals in order to borrow money from their relatives and friends.
A Deepfake video call can last as little as a minute, making it very difficult for victims to distinguish between a real and a fake call.
Deepfakes are gradually becoming a "nightmare" in sophisticated online scams.
"Deepfakes have become a nightmare for women and society. Cybercriminals are exploiting artificial intelligence (AI) to superimpose victims' faces onto pornographic images and videos, as well as into propaganda campaigns. These forms of manipulation aim to sway public opinion by spreading misinformation, and can even damage the reputation of organizations or individuals," said Ms. Vo Duong Tu Diem, Regional Director of Kaspersky Vietnam.
Although AI is being misused by criminals for nefarious purposes, individuals and businesses can still use artificial intelligence to identify deepfakes, reducing the likelihood of successful scams.
Users already have several practical defenses against such scams, such as AI-generated content detection software, which uses advanced algorithms to analyze images, videos, and audio and estimate how heavily they have been edited.
For Deepfake videos, there are now tools that detect mismatches between mouth movement and speech. Some programs are even powerful enough to detect blood flow under the skin by analyzing subtle, pixel-level color changes in the video: each time the heart pumps blood, the skin's color shifts slightly, a signal Deepfake generators rarely reproduce.
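The blood-flow idea above is known in research as remote photoplethysmography (rPPG). As a minimal illustrative sketch (not any vendor's actual detector), the code below averages the green channel of each frame, then checks whether the dominant frequency of that signal falls in the human pulse range; the simulated footage and all names here are hypothetical.

```python
import cmath
import math
import random

def green_mean(frame):
    """Mean green-channel value of a frame; frame is rows of (r, g, b) pixels."""
    vals = [px[1] for row in frame for px in row]
    return sum(vals) / len(vals)

def dominant_frequency(signal, fps):
    """Return the frequency (Hz) of the strongest component via a plain DFT."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(centered[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fps / n

# Simulated "real" footage: a faint 1.2 Hz pulse (72 bpm) plus sensor noise.
fps, seconds = 30, 10
rng = random.Random(0)
frames = []
for i in range(fps * seconds):
    pulse = 0.5 * math.sin(2 * math.pi * 1.2 * i / fps)
    g = 128.0 + pulse + rng.gauss(0, 0.05)
    frames.append([[(0.0, g, 0.0)]])  # a 1x1 "frame" is enough for the sketch

signal = [green_mean(f) for f in frames]
freq = dominant_frequency(signal, fps)
print(0.7 <= freq <= 4.0)  # → True: the dominant frequency sits in the pulse band
```

A Deepfake face that lacks this periodic color variation would instead show a dominant frequency driven by noise, usually outside the 0.7-4 Hz pulse band.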
In addition, watermarks serve as identifying marks in images, videos, and other media, helping authors protect the copyright of their AI products. They can also become a weapon against Deepfakes, because they make it possible to trace which platform generated a piece of AI content. Tech-savvy users can then track down the source and compare how the original data was modified by AI.
Currently, some emerging technologies use cryptographic hash algorithms to insert hash values at set time intervals within a video. If the video is edited, the affected hash values change, allowing users to verify whether the content has been tampered with.
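The interval-hashing idea can be sketched in a few lines. This is a simplified model, with fixed-size byte segments standing in for fixed time intervals of video; the function names and sample data are invented for illustration.

```python
import hashlib

def segment_hashes(video_bytes: bytes, segment_size: int = 16):
    """SHA-256 hash of each fixed-size segment (a stand-in for a time interval)."""
    return [hashlib.sha256(video_bytes[i:i + segment_size]).hexdigest()
            for i in range(0, len(video_bytes), segment_size)]

def first_tampered_segment(original_hashes, candidate_bytes, segment_size: int = 16):
    """Compare stored hashes against a candidate copy; return the first mismatch."""
    candidate_hashes = segment_hashes(candidate_bytes, segment_size)
    for idx, (h_orig, h_new) in enumerate(zip(original_hashes, candidate_hashes)):
        if h_orig != h_new:
            return idx
    return None  # no tampering detected

original = b"frame0frame1frame2frame3frame4frame5"
hashes = segment_hashes(original)           # published alongside the video
edited = original.replace(b"frame2", b"FAKE_2")

print(first_tampered_segment(hashes, original))  # → None: untouched copy verifies
print(first_tampered_segment(hashes, edited))    # → 0: the edit falls in segment 0
```

Because each hash covers only a short interval, a verifier can localize roughly where an edit occurred, not just that the file changed.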
Previously, there were guidelines on spotting anomalies in videos, such as color distortion, unnatural muscle movement, or odd eye movement. However, as AI becomes increasingly capable, these visual cues no longer yield reliable results.
The process of verifying the credibility of videos no longer relies on visual inspection but requires technological tools designed to prevent and detect fake content.
Khanh Linh