Hoang Oanh (name changed) is an office worker in Hanoi. During a conversation with a friend on Facebook Messenger, the friend said goodbye and ended the chat, but suddenly returned to ask to borrow money, suggesting that Oanh transfer it to a bank account.
Although the account name matched her friend's, Hoang Oanh was still slightly suspicious and requested a video call to verify. Her friend agreed immediately, but the call lasted only a few seconds, which the friend blamed on "internet lag." Having seen her friend's face and heard the familiar voice on the call, Hoang Oanh no longer had doubts and transferred the money. Only after the transfer went through did she realize she had fallen into the scammers' trap.
Hoang Oanh's case is just one of many in which fraud groups use artificial intelligence to recreate the faces and voices of victims' friends and relatives in order to trick them out of their money.
Bkav experts said that in the second half of 2023, and especially around Lunar New Year 2024, the information security company continuously received reports and requests for help from victims of similar fraud cases.
According to the company's analysis, in Hoang Oanh's case the attackers took control of the Facebook account but did not lock the owner out immediately. Instead, they quietly monitored the account, waiting for an opportunity to impersonate the owner and ask friends and relatives for money.
The scammers used AI to create a fake video (a deepfake) of the account owner's face and voice. When asked to make a video call to verify, they accepted the call but quickly disconnected to avoid detection.
Mr. Nguyen Tien Dat, General Director of Bkav's Malware Research Center, emphasized that even when users make a video call and see the face of a relative or friend and hear their voice, they are not necessarily talking to that person. Recently, many people have fallen victim to financial scams built on deepfakes and other artificial intelligence technology.
“The ability to collect and analyze user data through AI allows for the creation of sophisticated fraud strategies. This also means that the complexity of fraud scenarios when combining Deepfake and GPT will increase, making fraud detection much more difficult,” said Mr. Nguyen Tien Dat.
Bkav experts recommend that users be especially vigilant: do not provide personal information (ID card, bank account, OTP code, etc.), and do not transfer money to strangers via phone, social networks, or websites that show signs of fraud. When a request to borrow or transfer money arrives via social networks, users should verify it through a different channel, such as a direct phone call, before sending any money.
Forecasting cyber attack trends in 2024, experts agree that the rapid development of AI not only brings clear benefits but also creates significant risks to cybersecurity.
The biggest challenges that AI poses for businesses and organizations today are fraud and advanced persistent threats (APTs), as fraud scenarios grow more complex, especially when deepfakes are combined with GPT-style models. AI's ability to collect and analyze user data enables sophisticated fraud strategies, making scams harder for users to identify.
Strengthening AI security is an unavoidable trend in the coming period. The international community will need to cooperate closely to develop new security measures, while also raising users' knowledge and awareness of the potential risks of AI.