Vietnam.vn - A platform promoting Vietnam

Data security in the age of AI.

Deepfakes, phishing attacks, and digital identity theft are becoming more sophisticated and can strike at any time, targeting anyone lacking digital defense skills.

Báo Sài Gòn Giải Phóng, 18/11/2025

Attack from every angle.

Google has launched a new artificial intelligence (AI) product called "Gemini Nano Banana." It is not a large language model (LLM) capable of writing and reasoning, but an AI specializing in photo editing... Many users are excited about the tool, but it also brings a number of drawbacks.

Tran Hung Dao Primary School in Cau Ong Lanh Ward, Ho Chi Minh City, uses a facial recognition system to take attendance and send notifications to parents via an app. Photo: HOANG HUNG

“It’s like a ‘step backward’ in AI safeguards and ethical limits,” said Nguyen Hong Phuc, Chief Scientific Officer of Conductify AI, expressing a personal opinion. “Gemini Nano Banana has effective image-editing capabilities. But by loosening Gemini Nano Banana’s content safeguards and restriction mechanisms, Google has allowed this AI to create images strikingly similar to real celebrities, as well as suggestive and sensitive images,” Phuc said.

According to Mr. Nguyen Hong Phuc, this capability turns Gemini Nano Banana into a tool for creating fake images so realistic that even AI experts cannot distinguish them, opening up new fraud and cybersecurity risks: fabricated photos of prominent politicians used to spread disinformation, or individuals defamed on social media with AI-manipulated photos...

Furthermore, with dozens of other AI photo-editing tools taking the internet by storm, deepfake scams show no sign of stopping. Statistics from security organizations show that deepfake technology powers phishing campaigns targeting high-value individuals, especially business leaders. In 2024, there were 140,000-150,000 such cases globally, with 75% targeting CEOs and senior executives... Deepfakes could push related global economic losses up 32% a year, to approximately $40 billion annually by 2027.

Recently, AI company Anthropic detected and thwarted a cyberattack campaign carried out by foreign hackers, which for the first time recorded the use of AI to coordinate automated attack activities. Specifically, the hackers used AI systems to plan, direct, and deploy intrusion operations – a “worrying” development that risks expanding the scale and speed of cyberattack campaigns…

Mr. Ngo Minh Hieu, Director of the Anti-Fraud Organization of the National Cybersecurity Association, shared that AI-integrated tools speed up fraud by roughly 20 times compared with human operators. Fraudsters feed the AI a fraud scenario for analysis, and the system immediately begins chatting with victims automatically. From the initial data, the AI can work out what a victim wants and converse with them in real time.

The most sophisticated tactic is the "man-in-the-middle" attack: when a user performs video authentication (such as recording their face or speaking a prompt), the hacker intercepts and tampers with the transmitted data. Cybercriminals then insert pre-prepared deepfake videos, replacing the user's real data stream to bypass the authentication step. With this method, even the security systems of banks and financial institutions risk being deceived.
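One standard countermeasure to this kind of stream substitution is to cryptographically bind each captured frame to a fresh server-issued nonce, so that a pre-recorded deepfake injected mid-stream fails verification. The sketch below is an illustration only, not any bank's actual protocol; the device key provisioning and raw frame bytes are assumptions:

```python
import hmac
import hashlib
import secrets

# Assumption: a secret key has been provisioned to the genuine capture
# device over a separate, already-authenticated channel.
DEVICE_KEY = secrets.token_bytes(32)

def sign_frame(frame: bytes, nonce: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Sign a captured frame together with a server-issued nonce."""
    return hmac.new(key, nonce + frame, hashlib.sha256).digest()

def verify_frame(frame: bytes, nonce: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Server-side check: a substituted frame without the key fails."""
    expected = hmac.new(key, nonce + frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Genuine flow: the camera signs each frame with the fresh nonce.
nonce = secrets.token_bytes(16)
real_frame = b"\x00\x01..."  # stand-in for raw frame bytes from the camera
tag = sign_frame(real_frame, nonce)
print(verify_frame(real_frame, nonce, tag))   # True

# Attacker swaps in a deepfake frame mid-stream: the tag no longer matches.
fake_frame = b"\xff\xfe..."
print(verify_frame(fake_frame, nonce, tag))   # False
```

Because the nonce is fresh per session, replaying a previously captured (and previously valid) deepfake stream also fails, which is the property the man-in-the-middle tactic described above exploits in systems that lack it.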

Better data management

As AI becomes increasingly prevalent, security and privacy risks are also significantly increasing. Data stored and processed on remote servers will become an attractive target for hackers and cybercriminals. According to data from the National Cybersecurity Association, in the second quarter of 2025 alone, the number of AI-powered cyberattacks increased by 62%, causing global losses of up to $18 billion...

Experts believe data protection is crucial. However, the collection and sale of data continues openly on many "black market" platforms, for as little as $20 per month. Cybercriminals can use tools that mimic language models but are customized for criminal purposes. These tools are capable of generating malware and even bypassing antivirus software.

In Vietnam, Decree No. 13/2023/ND-CP (effective from April 17, 2023) regulates the protection of personal data. Furthermore, the Law on Personal Data Protection, expected to take effect on January 1, 2026, will provide a stronger legal mechanism for addressing the risk of personal data leaks and misuse. However, according to the National Cybersecurity Association, enforcement still needs to be strengthened across three pillars: raising public awareness, increasing corporate responsibility, and improving the processing capacity of regulatory agencies. Beyond technical measures, each individual needs to develop the ability to spot unusual and suspicious signs and proactively protect themselves in risky digital interactions...

Kaspersky has warned about the exploitation of "Dark AI," a term for large language models (LLMs) deployed outside standard security controls and often used for phishing, manipulation, cyberattacks, or data abuse. Organizations and individuals need to prepare for increasingly sophisticated and hard-to-detect cyberattacks driven by the rise of Dark AI in the Asia-Pacific region.

Sergey Lozhkin, Head of Global Research and Analysis Team (GReAT) for the Middle East, Turkey, Africa, and Asia-Pacific regions at Kaspersky, shared that one manifestation of current AI abuse is the emergence of Black Hat GPT models. These are AI models specifically developed or modified to serve unethical and illegal purposes, such as creating malware, crafting fluent and persuasive phishing emails for large-scale attacks, creating deepfake voices and videos, and even supporting simulated attacks.

Kaspersky experts recommend that individuals and businesses use next-generation security solutions capable of detecting AI-generated malware, while prioritizing data protection. Businesses should deploy real-time monitoring tools to track AI-driven vulnerability exploitation; tighten access controls and train employees to curb unsanctioned "shadow" AI use and the risk of data leaks; and establish a cybersecurity operations center to monitor threats and respond quickly to incidents.

Source: https://www.sggp.org.vn/bao-mat-du-lieu-thoi-ai-post824210.html

