Dark AI operates outside established safety norms, enabling behaviors such as fraud, manipulation, cyberattacks, or mass data harvesting with little oversight.
“Bad actors are also using AI to enhance their attack capabilities. The most common form of AI abuse today is the emergence of Black Hat GPT models. These are AI models that are specially developed or adapted to serve illegal purposes such as creating malware, drafting phishing emails, generating voices and deepfake videos…” said Sergey Lozhkin, Head of Global Research and Analysis Team (GReAT) for the Middle East, Turkey, Africa and Asia-Pacific at Kaspersky.
Black Hat GPTs can exist as fully private or semi-private AI models. Notable examples include WormGPT, DarkBard, FraudGPT, and Xanthorox, all designed to serve cybercrime, fraud, and malicious automation.
Lozhkin revealed that Kaspersky experts are now recording an even more worrying trend, with nation-state or government-backed cyberattack groups starting to leverage large language models (LLMs) in their attack campaigns.
“OpenAI recently announced that it has disrupted more than 20 covert influence campaigns and cyberattacks that were exploiting its AI tools. We will continue to see increasingly sophisticated attackers weaponizing AI, posing a threat to both public and private ecosystems. This is a worrying trend that all organizations need to prepare for,” Lozhkin said.
According to OpenAI’s report, malicious actors have begun to exploit the capabilities of large language models (LLMs) to deceive victims and bypass conventional security layers. These actors can craft convincing fake messages, distribute them to victims at scale, and produce content in multiple languages.
To strengthen defenses against Dark AI threats, Kaspersky experts recommend:
- Using next-generation security solutions such as Kaspersky Next to detect AI-generated malware and control risks in the supply chain.
- Applying real-time threat intelligence tools to monitor AI-driven exploits.
- Strengthening access controls and employee training to limit Shadow AI and the risk of data leakage (see the sketch after this list).
- Establishing a Security Operations Center (SOC) to monitor threats and respond quickly to incidents.
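One way to make the "limit Shadow AI" recommendation concrete is egress filtering: compare the hostnames of outbound requests against an allowlist of sanctioned AI services and flag everything else for the SOC. The following is a minimal sketch of that idea, not part of Kaspersky's guidance; the domain lists and the classify_request helper are hypothetical examples chosen for illustration.

```python
# Minimal sketch: flagging unsanctioned ("Shadow AI") traffic by hostname.
# The policy and domain lists below are illustrative assumptions only.

from urllib.parse import urlparse

# Hypothetical policy: AI services the organization has vetted and approved.
APPROVED_AI_HOSTS = {"api.openai.com"}

# Hypothetical watchlist of known generative-AI API endpoints to monitor.
KNOWN_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def classify_request(url: str) -> str:
    """Classify an outbound request URL as 'allowed', 'shadow-ai', or 'other'."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return "allowed"
    if host in KNOWN_AI_HOSTS:
        return "shadow-ai"  # unsanctioned AI use: log and alert the SOC
    return "other"

if __name__ == "__main__":
    for url in [
        "https://api.openai.com/v1/chat/completions",
        "https://api.anthropic.com/v1/messages",
        "https://example.com/",
    ]:
        print(url, "->", classify_request(url))
```

In practice such a check would sit in a forward proxy or secure web gateway rather than a standalone script, and the watchlist would be fed by threat intelligence rather than maintained by hand.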
Source: https://www.sggp.org.vn/kaspersky-canh-bao-toi-pham-mang-dung-dark-ai-tan-cong-post808493.html