Labeling AI-generated products

Emphasizing risk-control mechanisms for AI in modern life, Deputy Nguyen Tam Hung (HCMC) said the draft law's requirement that suppliers ensure machine-readable marking of AI content is essential. However, he suggested adding a mandatory requirement to display identification information directly on the interactive interface whenever content risks causing confusion, in order to protect ordinary users, especially children and the elderly, from misinformation and deepfakes. Such a regulation, he said, would enhance the social responsibility of technology providers and build digital trust among the public.
In addition, Deputy Nguyen Tam Hung said his research showed that today's multi-purpose AI models not only generate text and images but can also produce source code, attack tools, execution instructions, cyberattack procedures, financial-fraud methods, and more. He therefore proposed adding an obligation to control a model's "self-generating capability": where source-code generation is permitted, a mechanism to warn about, block, and limit such output is necessary to ensure network security and social order and safety.

Also addressing the risks created by AI, Deputy Pham Van Hoa (Dong Thap) said that although AI technology is a tool that supports humans, it carries many serious risks, and he asked the drafting committee to assess this matter objectively and accurately.
“The risks can affect human life, property, and the lawful rights and interests of organizations and individuals,” said Deputy Pham Van Hoa, adding that AI-created products should be labeled. According to him, labeling lets people know which products are created by AI and which are not.

Meanwhile, Deputy Trinh Thi Tu Anh (Lam Dong) said that Article 11 of the draft law does not fully reflect a risk-assessment approach based on technological impact. In the current context, she noted, AI can create harmful methods that have never before been identified. She therefore proposed adding the provision: "The Government shall specify in detail and periodically update and supplement the list of artificial intelligence systems with unacceptable risks, based on risk assessment, technology trends and practical management requirements." According to her, this mechanism ensures that the law does not become outdated and can respond quickly to new risks.
Avoiding AI abuse
Deputy Tran Khanh Thu (Hung Yen) raised the use of AI in supporting medical examination and treatment. AI, she said, not only helps optimize processes in the medical sector, reducing the workload of doctors and medical professionals, but also plays an important role in personalizing treatment, thereby improving health-care outcomes and making resource use more efficient. However, using AI to detect health problems can sometimes be counterproductive.

She cited the real case of a 55-year-old patient in Ho Chi Minh City who was lethargic when brought to the hospital. Earlier, the patient's family had used AI to diagnose him and then bought medicine for him to take. His condition worsened after taking the medicine, and when he was finally brought to the hospital he was diagnosed with brain damage and a stroke, with unavoidable sequelae.
“People absolutely should not use search results to diagnose illness or prescribe drugs. But do those who sell drugs according to AI prescriptions violate the law?” Deputy Tran Khanh Thu asked, hoping that this draft Law on Artificial Intelligence would include a more specific legal framework for managing AI products in the health-care sector, especially regarding the responsibilities of the parties involved.
Regarding the draft law's shift from post-audit to pre-audit, Deputy Tran Khanh Thu suggested setting clear quantitative thresholds, for example triggering review when a model changes its algorithm, its user base grows, or its training data exceeds a specified level.

Looking further at the “abuse of AI,” Deputy Pham Trong Nghia (Lang Son) said that over-reliance on AI in daily life will erode people's capacity for thinking and creativity. AI abuse harbors many potential instabilities, such as violating national sovereignty, reducing the effectiveness of public administration, and negatively affecting every individual and family, especially the younger generation.
“In the public sector, AI abuse reduces the feasibility, humanity and effectiveness of policies and laws. Providing sensitive information and data to AI, if not strictly controlled, can lead to data leaks, harming national security and social order and safety with unpredictable consequences,” said Deputy Pham Trong Nghia, who proposed adding to the draft law a provision on prohibited acts, including a ban on AI abuse in the public sector.
Source: https://www.sggp.org.vn/can-co-che-danh-gia-rui-ro-lien-quan-toi-ai-post825783.html





