AI is becoming a powerful assistant for cybercriminals. Photo: Cato Networks.
It is not only ordinary users: cybercriminals are also increasingly using AI to create more sophisticated attacks than ever. In less than two years, AI tools built to support hackers have developed rapidly in both features and effectiveness.
Adrian Hia, Director of Kaspersky Asia Pacific and Japan (APJ), said that Dark AI plays a huge role in the rapid growth of malware. Statistics from Kaspersky show that nearly half a million new malware samples are detected every day.
"Dark AI is like the dark web - the dark side of the web that we use every day," said Mr. Hia.
Sergey Lozhkin, Head of Kaspersky's Global Research and Analysis Team (GReAT) for the Asia-Pacific (APAC) and Middle East, Africa, Turkey (META) regions, said that Dark AI is completely different from conventional AI chatbots such as ChatGPT or Gemini.
The AI tools most people are familiar with were built to assist users with legitimate tasks; even when ordered to create malware or carry out illegal activities, their safeguards make such requests difficult to fulfill.
Dark AI, meanwhile, "is actually LLMs or chatbots created solely to perform malicious activities in cyberspace," Lozhkin said. These AIs are trained to generate malware, phishing content, or even complete frameworks for cybercriminals to use in fraud.
Hidden AI systems for "reputable" hackers only
These systems, which Mr. Lozhkin calls "Black Hat GPT," have been around since 2023. Kaspersky representatives said that at the time, these AIs were very basic: they produced very poor code that could be detected immediately.
Recently, however, Black Hat GPT systems have matured, offering many functions, from generating malicious code and fraudulent content to impersonating voices and faking videos. Known names include WormGPT, DarkBard, and FraudGPT.
Sergey Lozhkin, a Kaspersky researcher, warns that the quality of Dark AI is getting better and better. Photo: Minh Khoi.
One of Kaspersky's most disturbing discoveries is the existence of private Dark AI systems available only to cybercriminals with established reputations. To gain access to these private AI systems, hackers must first build credibility on underground forums.
"You can only access these types of systems if you are trusted. You need to build a reputation on underground forums to get access to these assistants," the Kaspersky expert said.
Lozhkin said Kaspersky's "Digital Footprint Intelligence" unit has spent years infiltrating underground forums and monitoring activity there. Once the company gains access to a new system, it spends considerable time studying how the Dark AI works, what functions it offers, and how effective it is.
Dark AI is getting stronger
Dark AI is becoming an effective tool in skilled hands. For someone with only basic programming knowledge, asking one of these tools to generate ransomware will likely produce software that antivirus products recognize easily. For skilled, experienced hackers, however, Dark AI raises coding productivity and helps create new malware.
"This quality depends a lot on each specific platform or Dark AI. Some generate very low quality code, but there are also tools that can reach 7-8 points out of 10," Mr. Lozhkin shared with Tri Thuc - Znews .
One example is the ability to create polymorphic code. Previously, when a piece of malicious code was detected by antivirus software, hackers needed a long time to rewrite it. Now, Dark AI can do this "nearly instantly," producing a variant that is harder to detect.
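To see why polymorphism defeats signature-based scanning, consider a minimal, harmless Python sketch: it re-encodes the same benign payload with a fresh random key on every run, so the content never changes but the stored bytes, and therefore any fixed byte signature, differ each time. Everything here (the `PAYLOAD` constant, the helper names) is illustrative, not taken from any real malware or from Kaspersky's research.

```python
import hashlib
import os

# A harmless stand-in for a program's body; the signature problem
# is the same for any byte sequence.
PAYLOAD = b"hello, world"

def make_variant(payload: bytes) -> tuple[bytes, bytes]:
    """XOR-encode the payload with a fresh random key.

    The decoded content never changes, but the stored bytes do,
    so a fixed byte-pattern signature cannot match two variants.
    """
    key = os.urandom(len(payload))
    encoded = bytes(b ^ k for b, k in zip(payload, key))
    return encoded, key

def decode(encoded: bytes, key: bytes) -> bytes:
    # XOR with the same key restores the original bytes.
    return bytes(b ^ k for b, k in zip(encoded, key))

if __name__ == "__main__":
    for i in range(3):
        encoded, key = make_variant(PAYLOAD)
        # Each "variant" hashes differently, so a signature taken from
        # one run's bytes will not flag the next run's bytes ...
        print(i, hashlib.sha256(encoded).hexdigest()[:16])
        # ... yet every variant decodes to the identical payload.
        assert decode(encoded, key) == PAYLOAD
```

This is also why modern antivirus products lean on behavioral and heuristic analysis rather than byte signatures alone.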
Additionally, dark LLMs are breaking the language barrier in phishing attacks. Kaspersky experts say the new generation of phishing emails is harder to detect because of their almost perfect English; a message can even "look exactly like the CEO's voice."
Some "all-in-one" Dark AI systems can generate code, integrate voice impersonation, and send phishing emails. Photo: Kaspersky. |
Specialized Dark AI systems are also getting better and better at faking voices and images. Mr. Lozhkin even said that, given his experience in security, he "no longer believes in video calls."
However, Kaspersky experts also said that users can still recognize a scam call through personal behavior, speaking habits, and other unique traits.
"These traits are difficult to fake, so try to contact the person you believe is being impersonated directly," the security researcher told Tri Thuc - Znews.
"AI Arms Race"
The development of Dark AI has sparked a technological arms race between cybercriminals and cybersecurity professionals. “I personally use AI for everything now – research, programming, reverse engineering. My productivity this year is 10-20 times higher than before,” Lozhkin admits.
"If you don't use AI to create a defense mechanism while the bad guys use AI to attack, you will lose. This is the beginning of a race between the good guys and the bad guys using AI," he warned.
Adrian Hia, Director of Kaspersky APJ, said that Dark AI is helping to create malware faster. Photo: Minh Khoi.
OpenAI reported blocking more than 20 malicious operations by state-sponsored groups attempting to use ChatGPT to create malware and plan attacks. In 2024, Google likewise identified some 40 government-backed APT groups using its AI models.
Particularly worrisome are full-fledged frameworks like Xanthorox, which offers a “full package” service for $200 a month. The system integrates voice generation, phishing emails, and other attack tools into an easy-to-use interface.
Kaspersky experts warn that Dark AI is still in its infancy. Adrian Hia predicts that in the next few years the cost of using it will fall sharply, and no one yet knows how far it will develop.
“But what I am absolutely sure of is that we will be ready, because we are doing the same thing but from the opposite side,” Mr. Lozhkin commented.
Source: https://znews.vn/moi-nguy-khi-toi-pham-so-huu-ai-den-post1574400.html