
Kaspersky warns that cybercriminals are using Dark AI in attacks

On August 15, security firm Kaspersky warned that cyberattacks in the Asia-Pacific (APAC) region are becoming more sophisticated and harder to detect due to the rise of Dark AI (the use of AI for malicious or illegal purposes).

Báo Sài Gòn Giải phóng, 15/08/2025

A Kaspersky representative presents information about cybercriminals using Dark AI in attacks

Dark AI operates outside safety norms, often enabling behaviors such as fraud, manipulation, cyberattacks, or data harvesting without close oversight.

“Bad actors are also using AI to enhance their attack capabilities. The most common form of AI abuse today is the emergence of Black Hat GPT models. These are AI models that are specially developed or adapted to serve illegal purposes such as creating malware, drafting phishing emails, generating voices and deepfake videos…”, said Sergey Lozhkin, Head of Global Research and Analysis Team (GReAT) for the Middle East, Turkey, Africa and Asia-Pacific at Kaspersky.

Black Hat GPTs can exist as fully private or semi-private AI models. Notable examples include WormGPT, DarkBard, FraudGPT, and Xanthorox, all designed to serve cybercrime, fraud, and malicious automation.

Lozhkin revealed that Kaspersky experts are now recording an even more worrying trend, with nation-state or government-backed cyberattack groups starting to leverage large language models (LLMs) in their attack campaigns.

“OpenAI recently announced that it has disrupted more than 20 covert influence campaigns and cyberattacks that were exploiting its AI tools. We will continue to see increasingly sophisticated attackers weaponizing AI, posing a threat to both public and private ecosystems. This is a worrying trend that all organizations need to prepare for,” Lozhkin said.

According to OpenAI’s report, malicious actors have begun to exploit the capabilities of large language models (LLMs) to deceive victims and bypass conventional security layers. They can create convincing fake messages, send messages to victims in bulk, and produce content in multiple languages.

To strengthen defenses against Dark AI threats, Kaspersky experts recommend:

- Using next-generation security solutions such as Kaspersky Next to detect AI-generated malware and control risks in the supply chain.
- Applying real-time threat intelligence tools to monitor AI-driven exploits.
- Strengthening access controls and employee training to limit Shadow AI and the risk of data leakage.
- Establishing a Security Operations Center (SOC) to monitor threats and respond quickly to incidents.

Source: https://www.sggp.org.vn/kaspersky-canh-bao-toi-pham-mang-dung-dark-ai-tan-cong-post808493.html
