What do you think about the trend of hackers "weaponizing" AI to carry out cyber attacks and fraud?
Dr. Nguyen Tuan Khang: According to IBM's 2024 X-Force Threat Intelligence Index, Asia-Pacific, including Vietnam, was the region that suffered the most cyber attacks in the world in 2023, and manufacturing was the industry most affected by those attacks.
Attackers' main methods are still phishing aimed at vulnerable people and the exploitation of software vulnerabilities to install malware. In addition, an emerging trend in 2024 is cyber attacks involving artificial intelligence (AI).
A report by Wired points out that many attackers are using generative AI to guide their hacks, build fraudulent chatbots, or create Deepfake images and videos that mimic other people's faces and voices.
However, alongside this trend, information security systems are also beginning to integrate AI features, such as watsonx. AI can be exploited by attackers, but it can also take the place of humans in analyzing, monitoring, and interpreting data and in predicting attack scenarios, thereby improving defense capabilities and minimizing information security risks.
Deepfake scams are becoming more and more common. With the rapid development of AI, how dangerous will these attacks be in the future?
Dr. Nguyen Tuan Khang: Deepfake is essentially a technology that helps hackers create fake digital identities in order to impersonate other people. It will be a dangerous problem because the technology is becoming more and more sophisticated.
To combat Deepfake, the first thing to do is to determine whether a person’s image or voice is AI-generated. There is currently no universal tool that can detect Deepfakes immediately because attackers are constantly developing new models.
In addition to Deepfake detection, another technique for dealing with it is behavioral analysis. From an organizational and business perspective, it is necessary to build a system that combines both of these techniques.
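To make that combination concrete, here is a minimal sketch, not any specific vendor's product, of how a deepfake-detection score on the media itself could be weighed together with a behavioral-anomaly score on how the account is acting. The field names, scores, and thresholds are assumptions for illustration only.

```python
# Minimal sketch: combine a media deepfake score with a behavioral anomaly score.
# All names and thresholds are hypothetical illustrations, not a real product API.

from dataclasses import dataclass


@dataclass
class SessionSignals:
    media_fake_score: float        # 0.0 (likely real) .. 1.0 (likely AI-generated)
    behavior_anomaly_score: float  # 0.0 (normal behavior) .. 1.0 (highly unusual)


def should_escalate(signals: SessionSignals,
                    media_threshold: float = 0.8,
                    behavior_threshold: float = 0.7) -> bool:
    """Escalate to manual review if either signal is strong, or both are moderate."""
    if signals.media_fake_score >= media_threshold:
        return True
    if signals.behavior_anomaly_score >= behavior_threshold:
        return True
    # Two moderate signals together are also suspicious.
    return (signals.media_fake_score + signals.behavior_anomaly_score) >= 1.2


if __name__ == "__main__":
    session = SessionSignals(media_fake_score=0.65, behavior_anomaly_score=0.6)
    print("Escalate for review:", should_escalate(session))
```

The point of the sketch is only that neither signal alone is decisive; a system that cross-checks the media and the behavior is harder to fool than either check on its own.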
Recently, there have been cyber attacks in which hackers secretly planted malware in a company's systems. The malware lies in wait, analyzes all activities, and from that builds a fake identity to carry out malicious intentions. As Deepfake technology develops and is combined with AI-generated video, these types of attacks will become much more dangerous in the future.
With the escalation of Deepfake cyberattacks, how can we protect the elderly, children and other vulnerable groups from scammers?
Dr. Nguyen Tuan Khang: The elderly and children are often targeted by scammers using a technique called social engineering, a term that describes attacks carried out by manipulating human behavior.
Hackers can now use AI in combination with data collection, mining, and analysis to identify people who are likely to be scammed and then find ways to attack. In addition to raising awareness in the community, we must also accept that situations where users are scammed will occur, and must use technology to detect and prevent it.
Recently, there was a case in which a bank employee suspected that an elderly woman who came to transfer money was showing signs of being scammed. The employee promptly stopped the transaction and reported it to the authorities. Banks' IT systems now have technology that can take over such tasks from humans.
The role of the technology is that even when the sender is verified to be the real person, the system will still stop the transfer if it suspects the transaction is being manipulated by someone else. Such tools are called fraud prevention systems.
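As a hedged illustration of that idea, the sketch below holds a transfer for review when manipulation indicators pile up, even though the sender's identity has verified. The request fields, rules, and amounts are illustrative assumptions, not a real bank's API or policy.

```python
# Hypothetical fraud-prevention check: identity verification alone is not enough;
# a large transfer with several manipulation indicators is held for human review.

from dataclasses import dataclass


@dataclass
class TransferRequest:
    amount: float
    identity_verified: bool
    new_beneficiary: bool   # first-ever transfer to this recipient
    on_active_call: bool    # customer is on a phone call while making the transfer
    typing_anomaly: bool    # input pattern differs from the user's usual behavior


def review_transfer(req: TransferRequest, large_amount: float = 100_000_000) -> str:
    """Return 'approve', 'hold', or 'reject' for a transfer request."""
    if not req.identity_verified:
        return "reject"

    manipulation_signals = sum([req.new_beneficiary, req.on_active_call, req.typing_anomaly])

    # Even a verified sender is stopped when the transaction looks coached or coerced.
    if req.amount >= large_amount and manipulation_signals >= 2:
        return "hold"
    return "approve"


if __name__ == "__main__":
    req = TransferRequest(amount=250_000_000, identity_verified=True,
                          new_beneficiary=True, on_active_call=True, typing_anomaly=False)
    print(review_transfer(req))  # -> "hold"
```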
Is it time for Vietnam to introduce sanctions to manage AI and to place AI research, development, and use within a legal framework?
Dr. Nguyen Tuan Khang: AI management has been discussed for a long time, but there is still much controversy. For example, the parking lot in my area uses an AI system to recognize license plates, yet thefts still occurred. A controversy then arose over whose fault it was: should the apartment owner, the security guard, or the developer of the AI system be held responsible?
Since then, the building has changed its rules: residents can opt to use AI license plate recognition for convenience, but they must accept the risks. Those who agree can use the automatic doors; those who do not must park their cars the old way. We need sanctions of that kind.
Similarly, IBM once developed an AI system to help prevent cancer. If the system prescribes medicine and the patient takes it but still cannot be saved, is it the doctor's fault or the AI's fault?
I think AI regulation needs to be specific, stating clearly what can and cannot be done when developing AI applications. To make the world safer, the most basic regulation we can introduce is to require biometric verification for large money transfers. In that situation, people whose identity information is stolen can still avoid losing money.
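A minimal sketch of that proposed rule is given below, assuming a transfer threshold and a biometric_match() helper that are purely illustrative; the actual threshold and matching technology would be policy and implementation choices.

```python
# Hypothetical sketch: transfers above a threshold require a fresh biometric match,
# so stolen passwords or OTPs alone cannot move large sums.

BIOMETRIC_THRESHOLD_VND = 10_000_000  # assumed threshold; the real value is a policy decision


def biometric_match(live_sample: bytes, enrolled_template: bytes) -> bool:
    """Stand-in for a real face/fingerprint matcher; here it is only a placeholder."""
    return live_sample == enrolled_template


def authorize_transfer(amount_vnd: int, password_ok: bool,
                       live_sample: bytes, enrolled_template: bytes) -> bool:
    if not password_ok:
        return False
    # Below the threshold, existing credentials are enough.
    if amount_vnd < BIOMETRIC_THRESHOLD_VND:
        return True
    # Above it, a live biometric match is mandatory.
    return biometric_match(live_sample, enrolled_template)


if __name__ == "__main__":
    print(authorize_transfer(50_000_000, True, b"live", b"enrolled"))  # False: no biometric match
    print(authorize_transfer(5_000_000, True, b"live", b"enrolled"))   # True: small transfer
```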
Thank you.