Online fraudsters not only impersonate relatives or police officers but also harness artificial intelligence (AI) tools to generate hundreds of scam scenarios for targeting users.
Cybersecurity expert Ngo Minh Hieu says that fraudsters manipulate AI, creating hundreds of scam scenarios in minutes - Photo: Vu Tuan
According to the social enterprise Chongluadao.vn, fraudsters have used AI to generate malware, write scam scripts, and alter audio and video through deepfakes.
According to cybersecurity expert Ngo Minh Hieu (Hieu PC), representative of Chongluadao.vn, one dangerous tactic is "tricking the AI" into loading malware. "They create fake audio or image files with malware embedded in them that the AI fails to recognize. When the AI system processes the file, the malware activates and takes control," Hieu PC explained.
He gave an example: scammers use AI to fake the voice and image of a relative on a FaceTime call, tricking victims into transferring money.
Recently, the frequency of scams has risen sharply thanks to AI tools. By harnessing AI to commit fraud, scammers overcome language and geographical barriers, and their impersonation tactics are becoming increasingly sophisticated and dangerous.
Hieu PC notes that no matter what tools are used, cybercriminals always follow a script, a conclusion his team has drawn from receiving and processing hundreds of online fraud reports.
Common forms of fraud include impersonating relatives, government officials, police officers, or power company employees; more sophisticated schemes lure victims into investment traps, paid "task" assignments, or romance scams.
Cybersecurity experts say the first step to avoid image-based impersonation is not to share personal photos publicly on social media. Calls and messages requesting money transfers, clicks on links, or OTP codes are 99% likely to be scams.
Hacker tactics for attacking AI
According to cybersecurity experts, "adversarial attacks" are a tactic hackers use to "trick" AI: carefully crafted, deceptive inputs mislead or exploit the model, causing the AI to install malware or execute commands given by the fraudsters.
Fraudsters exploit this weakness to bypass AI, especially AI-based protective systems such as antivirus software, voice recognition, or bank transaction verification.
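For readers curious about the mechanics, the sketch below is a minimal, hypothetical illustration of an adversarial perturbation (the Fast Gradient Sign Method) against an image classifier. The library, model, and parameters are assumptions chosen for illustration and are not drawn from the cases reported here; real attacks on antivirus, voice-recognition, or banking systems are far more involved.

```python
# A minimal sketch of an adversarial attack (FGSM) on an image classifier.
# Assumes PyTorch and torchvision (>= 0.13) are installed; purely illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

# A pretrained classifier stands in for the "AI protective system".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Add a small, near-imperceptible perturbation that pushes the model
    toward a wrong prediction (Fast Gradient Sign Method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss the most.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Usage (hypothetical): x is a 1x3x224x224 tensor scaled to [0, 1],
# y is a LongTensor holding its correct class index.
# adversarial_x = fgsm_perturb(x, y)
# model(adversarial_x).argmax()  # often differs from the true class
```

The point of the example is that the altered input still looks normal to a person, yet the model's output changes, which is the same principle fraudsters rely on when they feed doctored files to AI-based defenses.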
Source: https://tuoitre.vn/lua-dao-mang-lua-ca-ai-tao-kich-ban-thao-tung-tam-ly-20250228163856719.htm