
Controlling the Dark Side of Artificial Intelligence


Báo Nhân dân, 15/06/2025

Q: What is an artificial intelligence (AI) model and what types of models are popular today?

Answer: To function, AI needs a “brain” called an AI model: a system of mathematical models and algorithms trained on huge amounts of data. It is this model that allows the machine to learn patterns, analyze information, make predictions, or create new content.
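For readers who want a concrete picture of what “trained on data” means, here is a minimal, purely illustrative Python sketch with made-up numbers (a toy example, not any of the systems discussed in this article): it fits a simple rule from example data and then uses that rule to make a prediction.

    import numpy as np

    # Toy, made-up training data: hours of study vs. exam score.
    hours = np.array([1, 2, 3, 4, 5], dtype=float)
    scores = np.array([52, 58, 65, 71, 78], dtype=float)

    # "Training": fit a straight line (a very simple model) to the examples.
    slope, intercept = np.polyfit(hours, scores, deg=1)

    # "Prediction": apply the learned rule to new, unseen input.
    new_hours = 6.0
    predicted = slope * new_hours + intercept
    print(f"Learned rule: score ~ {slope:.1f} * hours + {intercept:.1f}")
    print(f"Predicted score for {new_hours} hours of study: {predicted:.1f}")

Modern AI models learn far more complex patterns from far more data, but the principle is the same: learn from examples, then apply what was learned to new input.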

Based on input and output, current AI models can be divided into several groups. The most popular group today is text-to-text, in which the user enters text and the AI generates an appropriate text response (chatbots such as ChatGPT, Gemini, Claude…). The text-to-image group takes a text description as input and produces an AI-generated image as output (Midjourney, DALL·E…).

Next is the text-to-audio group, which converts text into speech or sound effects. The image-to-text group uses images as input for the AI to generate descriptions or analysis. The image-to-image group converts images into other images, often for tasks such as restoration or style transfer. In the image-to-video group, AI creates simulated motion videos from still images. The video-to-text group allows content to be analyzed or summarized from videos, and is applied in information extraction, video annotation, and surveillance data analysis systems.

One particularly powerful group today is text-to-video, which converts text into video. With this type of model, users simply enter a short description, and the AI system will generate a video that matches the context, action, lighting, and even camera angle.
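As a compact recap of the groups above, the illustrative Python sketch below arranges the same input-to-output taxonomy as a simple lookup table, filled only with the example tools named in this article; groups for which the article names no specific tool are left empty.

    # Illustrative summary: mapping each (input, output) group to the example
    # tools named in this article. Groups with no tools named are left empty.
    AI_MODEL_GROUPS = {
        ("text", "text"): ["ChatGPT", "Gemini", "Claude"],   # chatbots
        ("text", "image"): ["Midjourney", "DALL·E"],         # image generation
        ("text", "audio"): [],                                # speech, sound effects
        ("image", "text"): [],                                # description, analysis
        ("image", "image"): [],                               # restoration, style transfer
        ("image", "video"): [],                               # animating still images
        ("video", "text"): [],                                # summarization, annotation
        ("text", "video"): ["Pika Labs", "Sora", "Veo 3"],    # text-to-video
    }

    def tools_for(input_type, output_type):
        """Return the example tools listed for a given input/output pair."""
        return AI_MODEL_GROUPS.get((input_type, output_type), [])

    print(tools_for("text", "video"))  # ['Pika Labs', 'Sora', 'Veo 3']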

Prominent tools include Pika Labs and Sora (OpenAI); recently, the global technology community, including Vietnam, has paid particular attention to Veo 3, a tool developed by Google DeepMind. The clips created by Veo 3 are strikingly realistic, approaching the quality of professional filmmaking and impressing many viewers.

Creating videos has become as easy as writing a paragraph, which has led to a flood of "junk" content made purely to attract attention, including videos that are sensational, use vulgar language, or contain false information. More dangerously, tools like Veo 3, although not specialized deepfake tools, can still be exploited to create sophisticated fake images or voices of real people, easily fooling viewers, damaging personal reputations, undermining information security and social order, and increasing the risk of fraud in cyberspace. (Deepfake is a technology that applies AI to create fake images, videos, and audio so realistic that users have difficulty telling them apart from the real thing.)

More worryingly, a great deal of distorted and false content is created and spread at a dizzying speed, flooding social networks without verification or control. The most exposed and vulnerable groups are children and adolescents, who often lack the skills to distinguish truth from falsehood and are therefore very receptive to toxic content, with long-term consequences for their perception and behavior.

Q: Given the rapid development of tools like Veo 3, what is the solution to effectively limit the use of AI for improper purposes?

Answer: It is clear that the current management and legal system still lags behind the pace of AI development, creating many legal gaps and unpredictable risks. This calls for the early establishment of a proactive, flexible, and coordinated management framework, together with raising public awareness of the responsible use of AI.

The responsible development of AI depends primarily on us – those who create, manage, and use it. As has been said many times, innovation and regulation should not be seen as two opposing poles but as two sides of the same process. Therefore, on the one hand we need to stay open in order to encourage technological development; on the other, we need to proactively establish ethical principles and a legal framework appropriate to the new context.

In Vietnam, in addition to continuing to improve institutions, a number of solutions have recently been discussed, such as the proposal by the AI Ethics Committee of the Vietnam Software and Information Technology Services Association (VINASA) to build an AI Ethics Code of Conduct that sets behavioral limits for both developers and users.

Along with that is the idea of building a national-standard open language data set, with the goal of giving domestic and foreign AI developers a basis for complying with Vietnam's political, cultural, legal, and historical context.

Classification and labeling of AI products is also a viable proposal. AI labels should be mandatory and could be categorized by level of intervention, from partial editing to full AI production, or by content, as is already done for games and movies.
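To make the labeling idea more tangible, here is a hypothetical Python sketch of how such labels might be encoded; the level names (PARTIAL_EDIT, SUBSTANTIAL, FULLY_GENERATED) and the rating value are invented for illustration and do not correspond to any adopted standard.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical levels of AI intervention, from partial editing to full production.
    class AIInterventionLevel(Enum):
        PARTIAL_EDIT = "AI-assisted editing of human-made content"
        SUBSTANTIAL = "substantial AI generation with human curation"
        FULLY_GENERATED = "entirely AI-generated content"

    # Hypothetical label combining the intervention level with a content rating,
    # similar in spirit to the ratings already used for games and movies.
    @dataclass
    class AILabel:
        level: AIInterventionLevel
        content_rating: str  # e.g. an age-based rating such as "18+"

    label = AILabel(level=AIInterventionLevel.FULLY_GENERATED, content_rating="18+")
    print(f"{label.level.value} | rated {label.content_rating}")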

Along with that is the establishment of a clear post-audit mechanism that holds those involved accountable to the end. From a social perspective, digital literacy education for the whole population, especially children and adolescents, is essential; equipping people with the ability to identify harmful content and the habit of verifying information is a fundamental solution in a society increasingly disturbed by virtual content.

Source: https://nhandan.vn/kiem-soat-mat-trai-cua-tri-tue-nhan-tao-post886971.html

