Deepfakes are now a major concern as technology-based scams continue to increase.
After monitoring darknet forums, where cybercriminals often operate, security experts found so many criminals using deepfakes for fraud that demand far exceeded the supply of deepfake software currently on the market.
As demand outstrips supply, Kaspersky experts predict that deepfake scams will increase and take more diverse and sophisticated forms: from offering high-quality impersonation videos to using celebrity images in fake social media livestreams that promise to pay victims double the amount they send.
According to data from the identity verification provider Regula, 37% of businesses worldwide have encountered deepfake voice fraud and 29% have fallen victim to deepfake videos.
This technology has become a cybersecurity threat in Vietnam, where cybercriminals often use deepfake video calls to impersonate individuals and borrow money from their relatives and friends.
A deepfake video call can last just a minute, making it hard for victims to tell real from fake.
Deepfakes are gradually becoming a "nightmare" in sophisticated online scams.
“Deepfakes have become a nightmare for women and society. Cybercriminals are exploiting artificial intelligence (AI) to insert victims’ faces into pornographic photos and videos, as well as into propaganda campaigns. These forms aim to manipulate public opinion by spreading false information, even damaging the reputation of organizations or individuals,” said Ms. Vo Duong Tu Diem, Kaspersky’s Regional Director for Vietnam.
Although criminals are abusing AI for malicious purposes, individuals and businesses can still use artificial intelligence itself to identify deepfakes, reducing the probability that a scam succeeds.
Accordingly, users have several useful ways to protect themselves from scams, such as AI-generated content detection software, which uses algorithms to analyze images, video, and audio and determine how heavily they have been edited.
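As an illustration, here is a minimal Python sketch of how such a detector could be invoked. It assumes a hypothetical pretrained image classifier served through the Hugging Face transformers library; the model name and input file are placeholders, not real products.

```python
# Minimal sketch of calling an AI-generated-content detector.
# Assumption: a pretrained binary image classifier exists; the
# model name below is a hypothetical placeholder.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # hypothetical model name
)

for result in detector("suspect_photo.jpg"):  # hypothetical input file
    # Typical output: [{"label": "ai_generated", "score": 0.97}, ...]
    print(f'{result["label"]}: {result["score"]:.1%}')
```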
For deepfake videos, tools exist that can detect mismatches between mouth movements and speech. Some programs are even powerful enough to detect abnormal blood flow under the skin by analyzing a video's pixels, because human skin subtly changes color as the heart pumps blood.
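The "blood flow" approach is known in the research literature as remote photoplethysmography. As a toy illustration of the principle, the sketch below averages the green channel of each frame over a central region (a crude stand-in for face detection) and checks how much of the signal's energy falls in the normal heart-rate band; the input file name is an assumption, and real detectors are far more sophisticated.

```python
# Toy remote-photoplethysmography check: a genuine face on camera
# shows a faint periodic color change at the heart rate; many
# synthetic faces do not. Illustrative only.
import cv2
import numpy as np

cap = cv2.VideoCapture("suspect_call.mp4")       # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back if metadata is missing
samples = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]  # crude face stand-in
    samples.append(roi[:, :, 1].mean())          # green channel carries the pulse best
cap.release()

sig = np.asarray(samples) - np.mean(samples)     # remove the DC component
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
power = np.abs(np.fft.rfft(sig)) ** 2

band = (freqs > 0.7) & (freqs < 4.0)             # ~42-240 beats per minute
ratio = power[band].sum() / (power.sum() + 1e-9)
print(f"Pulse-band energy ratio: {ratio:.2f}")   # very low ratio -> no pulse-like signal
```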
In addition, watermarks act as identification marks in images, videos, and other media, helping authors protect the copyright of AI-generated products. They can become a weapon against deepfakes because they make it possible to trace which AI platform generated a piece of content. Users with some technical knowledge can then trace the content back to its origin and compare how the original data was edited by AI.
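As a toy illustration of the watermarking idea, the sketch below hides a short, known bit pattern in the least significant bits of an image and later checks whether it is still there. The file names and bit pattern are arbitrary assumptions; production provenance watermarks are far more robust and are typically cryptographically tied to the generating platform.

```python
# Toy invisible watermark: write a known 8-bit tag into the least
# significant bits of the blue channel, then verify it later.
# Assumes the image is at least 8 pixels wide and saved losslessly.
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary tag

def embed(path_in: str, path_out: str) -> None:
    img = np.array(Image.open(path_in).convert("RGB"))
    px = img[0, : MARK.size, 2]                        # first pixels, blue channel
    img[0, : MARK.size, 2] = (px & 0xFE) | MARK        # overwrite the low bits
    Image.fromarray(img).save(path_out, format="PNG")  # lossless format required

def has_mark(path: str) -> bool:
    img = np.array(Image.open(path).convert("RGB"))
    return bool(np.array_equal(img[0, : MARK.size, 2] & 1, MARK))

embed("original.png", "watermarked.png")  # hypothetical file names
print(has_mark("watermarked.png"))        # True if the tag survived intact
```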
Currently, some emerging technologies use cryptographic algorithms to insert hash values into a video at set time intervals. If the video is edited, the hash values change, and users can then verify whether the content has been tampered with.
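A minimal sketch of that interval-hashing idea, under the simplifying assumption that fixed-size byte segments of the file are hashed with SHA-256 (real systems typically hash decoded frames per time window and sign the digests; the file names and segment size here are illustrative):

```python
# Sketch of interval hashing: split a video file into fixed-size
# segments, hash each segment, and compare against a trusted
# reference list. Any edit changes at least one segment hash.
import hashlib

SEGMENT_BYTES = 1 << 20  # 1 MiB per segment (illustrative choice)

def segment_hashes(path: str) -> list[str]:
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(SEGMENT_BYTES):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

reference = segment_hashes("original.mp4")    # recorded when the video is published
suspect = segment_hashes("downloaded.mp4")    # recomputed later by the viewer

# A length mismatch also indicates editing (content added or removed).
tampered = [i for i, (a, b) in enumerate(zip(reference, suspect)) if a != b]
print("Tampered segments:", tampered if tampered else "none detected")
```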
In the past, there were tutorials on spotting anomalies in videos by eye, such as color deviations, unnatural movements of muscle groups, or odd eye behavior. However, AI is getting smarter, so these cues no longer reliably give correct results.
Verifying a video's reliability is no longer a naked-eye task; it requires technological tools built specifically to prevent and detect fake content.
Khanh Linh