Hackers use artificial intelligence for cyberattacks, fake news
Specifically, the head of the Canadian Centre for Cyber Security, Sami Khoury, said in an interview with Reuters this week that his agency has seen AI being used in “phishing emails, malware and misinformation to distort the truth.”
In addition, many cybersecurity organizations have published reports warning of the potential risks of AI, especially large language models (LLMs), which are trained on large amounts of data to produce realistic text, images, videos, and more.
According to a March report by Europol, artificial intelligence models like OpenAI’s ChatGPT make it easy to impersonate an organization or individual with only a basic grasp of English. Also this month, the UK’s National Cyber Security Centre warned of the risk of cybercriminals using LLMs to expand their cyberattack capabilities.
Security researchers have demonstrated a variety of malicious AI use cases, and some experts have begun documenting suspected AI-generated content in the wild. Last week, a former hacker reported finding an LLM trained on malicious data and asked it to draft a message designed to trick someone into transferring money. The model responded with a three-paragraph email requesting help in an urgent tone.
The LLM wrote: “I understand the short notice, but this payment is extremely important and needs to be made within 24 hours.”