Generative AI is posing a host of cybersecurity risks, especially in the wake of widespread cyber scams in Hong Kong, according to a cybersecurity firm that is deploying AI to combat the threat.
Phishing emails and voice-spoofing calls generated by artificial intelligence are becoming more sophisticated and harder to detect. (Photo: Shutterstock)
In Hong Kong, scammers have defrauded people out of HK$4.8 billion ($611.5 million) through online chats, phone calls and text messages. AI-generated audio, video and text are making these types of scams harder to detect.
The emergence of generative AI tools like ChatGPT will allow certain types of scams to become more prevalent and effective, said Kim-Hock Leow, Asia CEO of Wizlynx Group, a Switzerland-based cybersecurity services company.
“We can see that voice and video mimicry is becoming more and more realistic, and it can be used by those looking to steal information and compromise corporate cybersecurity,” he said.
Governments are starting to take action against the rise of online scams. In February, the Beijing Municipal Public Security Bureau warned in a statement on WeChat that scammers could use AI to “commit crimes and spread rumors.” And in March, the U.S. Federal Trade Commission issued a warning about scammers using AI-generated voices to impersonate people using only a short audio clip of their voice from online.
“Everyone is susceptible to phishing attacks, but they are easily detected due to length, misspellings, or lack of context about you and your work. But now, cybercriminals can use new AI language models to increase the complexity of their phishing emails,” Leow said.
Scammers can run their messages through a tool like ChatGPT to polish and professionalize the language, and can use it to quickly conduct background research that adds personalizing context to scam emails.
Cybersecurity firms are now deploying ChatGPT themselves to combat these sophisticated scams. They use chatbots to generate phishing emails for anti-fraud training purposes, as well as to identify vulnerabilities and conduct research on cybersecurity systems.
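As a rough illustration of the defensive side described above, the sketch below shows a simple rule-based scorer that flags common phishing indicators in an email, the kind of baseline check a security team might pair with AI-generated sample emails during anti-fraud training. The indicator phrases, weights, and threshold are illustrative assumptions, not a production detector and not any firm's actual system.

```python
# Hypothetical sketch: score an email against a few well-known phishing
# indicators. Phrases and weights below are assumptions for illustration.
PHISHING_INDICATORS = {
    "urgent": 2,                  # pressure to act immediately
    "verify your account": 3,     # credential-harvesting lure
    "click the link": 2,
    "wire transfer": 3,
    "gift card": 3,
}

def phishing_score(email_text: str) -> int:
    """Return a crude risk score: the sum of weights of matched indicators."""
    text = email_text.lower()
    return sum(w for phrase, w in PHISHING_INDICATORS.items() if phrase in text)

def is_suspicious(email_text: str, threshold: int = 4) -> bool:
    """Flag the email for human review when its score reaches the threshold."""
    return phishing_score(email_text) >= threshold
```

For example, "URGENT: please verify your account" matches two indicators (2 + 3 = 5) and is flagged, while an ordinary message scores zero. AI-polished phishing emails are precisely the ones such keyword heuristics miss, which is why, as Leow notes, defenders are turning to language models themselves.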
“Based on the knowledge and data that AI can collect and generate over time, cybersecurity experts can use it to accurately identify vulnerable and risky areas of the security system,” Leow said.
“We need to encourage cybersecurity professionals and other industries to use ChatGPT itself to improve their defenses. In a way, it is a double-edged sword that will be used for both cybersecurity and cybercrime,” he added.
The terms of service from OpenAI, the creator of ChatGPT, prohibit the use of its technology for illegal purposes. But there is still a risk that bad actors could bypass ChatGPT’s filters, Leow said.
Cybercrime is expected to cost $8 trillion globally this year in stolen money, lost assets and lost productivity, according to a report from Cybersecurity Ventures.
In the face of this threat, an AI arms race could be brewing in cybersecurity. “In the coming years, we will see security teams effectively embrace AI to improve threat recognition and automate much of the defense process,” said David Fairman, chief information officer for Asia-Pacific at Netskope.
Khanh Ly (Source: SCMP)