The UK Cyber Security Agency has warned organisations that are integrating artificial intelligence (AI) chatbots into their products and services, saying research shows that AI tools can easily be manipulated to perform malicious tasks.
In a blog post on August 30, the UK Cyber Security Centre (NCSC) said that experts have yet to grasp the potential cybersecurity issues associated with algorithms that can interact “like humans,” namely large language models (LLMs).
This poses risks when such models are incorporated into business processes, the NCSC said, with academics and researchers constantly finding ways to “trick” AI, instructing chatbots to commit fraud or bypass built-in safeguards.
For example, an AI chatbot used at a bank could be tricked into making an unauthorized transaction if the hacker gives the right “instructions.”
“Organizations building services around LLMs need to be cautious in the sense that they are using a new product in beta,” the NCSC said, referring to recent AI releases.
A recent Reuters/Ipsos poll found that many corporate employees are using tools like ChatGPT to perform basic tasks like drafting emails, summarizing documents and gathering initial research data.
Of these, only 10% of employees said their superiors explicitly forbade the use of external AI tools, and 25% did not know whether their company allowed the use of this technology.
The rush to integrate AI into business models will have “catastrophic consequences” if business leaders do not conduct the necessary due diligence, said Oseloka Obiora, chief technology officer at security firm RiverSafe. “Instead of jumping on the AI bandwagon, senior executives should think carefully and conduct a risk/benefit assessment, as well as the security measures needed to protect the business.”
Fake news and hackers
Around the world , governments are also looking to manage the rise of LLMs like OpenAI’s ChatGPT. Security concerns about the new technology are also a policy focus, with the US and Canada recently saying they have seen hackers increasingly exploiting generative AI.
One hacker even said he found an LLM that had been “trained” with malicious material and asked it to build a money transfer scam. In response, the AI generated a three-paragraph email asking the recipient to pay an urgent bill.
In July 2023, the Canadian Cyber Security Centre noted the increased use of AI in “email fraud, malware, misinformation and disinformation.” Sami Khoury, the centre’s director, did not provide specific evidence, but asserted that cybercriminals are actively exploiting this emerging technology.
The leader also said that although the use of AI to draft malware is still in its early stages, AI models are developing so quickly that it is easy to lose control of their malicious risks.
Earlier, in March 2023, the European police organization Europol published a report saying that ChatGPT could “impersonate organizations and individuals in a very realistic way, even when using only basic English.” That same month, the UK National Cyber Security Center said that “LLM could be used to support cyberattacks.”
(According to Reuters)
Source
Comment (0)