According to the Vietnam News Agency, one of the most common scams today is sending threatening emails demanding payment of fake fees, such as traffic fines, outstanding service charges, or fees to access restricted documents. These emails often lead to fake websites that trick users into handing over login credentials, passwords, or banking data. Phishing emails were once easy to spot thanks to spelling errors or poor-quality images; with the help of AI, they are now more polished, personalized, and persuasive.
According to Christophe Axen, Deputy Director of the Regional Computer Crime Unit of the Belgian Federal Judicial Police in Liège, generative AI is now capable of simulating entire conversations, lulling users into a false sense of security. Professor Benoît Donnet (University of Liège) shares this view, warning that AI can be trained to maintain a persuasive tone for extended periods, thereby gaining the victim's trust before tricking them into providing information or opening malicious files.
Particularly dangerous is AI's ability to personalize email content. Cybercriminals can use names, addresses, titles, and social media data to create fake information. In many cases, fake messages are sent from acquaintances or superiors, making victims even more susceptible to falling for the scam.
Scams in the age of AI: Anyone can be a victim. (Image: Internet)
Cybercriminals are no longer limited to individuals; they now operate as professional organizations with their own leadership, technical departments, accounting, and even cafeterias. They target larger entities such as hospitals, businesses, and government agencies. With AI support, these attacks become more unpredictable – from impersonating executives' emails to distributing documents containing malware.
Another form of fraud on the rise is the deepfake, an AI-powered technique for faking images and voices. According to VnEconomy magazine, cybersecurity expert Ngo Minh Hieu of the National Cybersecurity Monitoring Center (NCSC) warns that perpetrators are using deepfakes to steal videos and images, edit them, and impersonate relatives and friends to scam people into transferring money. Deepfake videos can often be identified by stiff facial expressions, mismatched audio, unusual lighting, or suspicious behavior such as requesting a money transfer and then abruptly disconnecting.
Mr. Ngo Minh Hieu advised people to be wary of any unusual messages on social media. If a message requests a money transfer, call or video-call the sender directly and ask private questions that only a genuine acquaintance could answer. Current AI cannot yet simulate a real-time conversation with natural, human-like emotion.
Security expert Vu Ngoc Son also believes that AI-powered scams will be the dominant cybersecurity trend in the near future. Malicious actors will continue to exploit new technologies to deceive users in increasingly sophisticated and unpredictable ways. To prevent this, users should absolutely not provide personal information, log into unfamiliar websites, or install software from unknown sources. In addition, government agencies need to strengthen public awareness campaigns and provide guidance on identifying high-tech scams. Network operators and service providers also need to regularly update their technological solutions to promptly detect and prevent fraudulent activities, ensuring user safety.
Source: https://thoidai.com.vn/lua-dao-thoi-ai-ai-cung-co-the-la-nan-nhan-213974.html