In a blog post on August 30, the UK Cyber Security Centre (NCSC) said that experts still do not fully understand the potential cybersecurity issues associated with algorithms that can interact “like humans”, called large language models (LLM).
The NCSC warns that this can pose risks, especially when such models are embedded into business processes. Academics and researchers have also recently found ways to “trick” AI, by giving it rogue commands or circumventing built-in safeguards.
Generative AI tools are being used as chatbots, which are expected to replace not only internet searches but also customer service and sales calls. |
For example, an AI chatbot deployed at a bank could be tricked into making unauthorized transactions if hackers set up the right “instructions.”
So, when referring to recent AI beta releases, the NCSC warns “Organizations building services around LLMs need to be as cautious as if they were using a new product in beta.”
A recent Reuters/Ipsos survey found that many corporate employees are using tools like ChatGPT to perform basic tasks like drafting emails, summarizing documents, or gathering initial research data.
Of these, only about 10% of employees said their superiors explicitly forbade the use of external AI tools, and 25% did not know whether the company allowed the use of this technology.
The race to integrate AI into business operations will have “catastrophic consequences” if business leaders do not conduct the necessary checks, said Oseloka Obiora, chief technology officer at security firm RiverSafe. “Instead of chasing the AI trend, leaders should think carefully and conduct an assessment of the benefits and risks and the security measures needed to protect the business.”
Fake news and hackers
Authorities around the world are also looking to regulate the proliferation of LLMs, such as OpenAI’s ChatGPT. Concerns about the security of new technologies are seen as a policy focus, with the United States and Canada recently discovering more cases of hackers increasingly using generative AI to commit illegal acts.
One hacker even said he found an LLM that had been “trained” with malicious material and asked it to build a money transfer scam. The AI responded by generating an email asking the recipient to pay an urgent bill.
In July 2023, the Canadian Cyber Security Centre observed an increase in the use of AI in “email fraud, malware, disinformation and misrepresentation”. The centre’s director, Sami Khoury, asserted that cybercriminals are actively exploiting this new technology.
This leader also added that although the use of AI to compose malware is only in its early stages, AI models are developing so quickly that it will easily lead to the point where it is impossible to control their malicious risks.
Previously, in March 2023, the European police organization Europol also released a report saying that ChatGPT can “impersonate organizations and individuals in a very realistic way even when using only basic English”. In the same month, the UK National Cyber Security Center affirmed that “LLM can be used to support cyber attacks”.
Source
Comment (0)