
For AI to truly become a driving force for national competitiveness, the legal framework needs to be designed to both control risks and promote innovation.
The need for a governance framework for AI
Since artificial intelligence (AI) entered everyday life, the boundary between human creativity and AI has become increasingly blurred, turning copyright and intellectual property rights into a new legal challenge. Alongside this, building a risk management mechanism for the creation and exploitation of AI-generated products has become urgent, to ensure that the technology serves legitimate purposes and human interests.
AI training data and the copyright problem
One of the prominent issues that needs legal regulation is the use of data to train AI. Such data is the foundation on which artificial intelligence systems learn, process information, and create new products, but it is also the source of much controversy, since training data can include literary works, music, images, press articles, or other creative products owned by individuals and organizations. Without clear regulations, this activity can easily lead to copyright infringement, violate the rights of creators, and cause legal and ethical consequences.
Drawing on copyright protection practice, lawyer Nguyen Thi Thu Hien of BIHACO Trading and Media Services Joint Stock Company said that many AI systems today are trained by automatically scanning and copying data on the internet, collecting millions of works without permission from, or payment to, the authors. Several lawsuits have already been filed internationally, accusing AI businesses of illegally using authors' works to train AI models. In Vietnam, although there have been no lawsuits yet, violations are occurring widely.
Many startups have collected the content of articles, blogs, and e-books and built Vietnamese-language chatbots without permission, affecting the rights of authors. A common example is chatbots that can summarize and reproduce the content of works, so that users no longer need to buy books or newspapers, causing authors to lose income and works to be exploited beyond their control. A song can be used to train AI to compose music, from which the AI creates free background music that competes with copyrighted works. According to lawyer Nguyen Thi Thu Hien, the current Intellectual Property Law is not equipped for the copyright challenges posed by AI and the risk of disputes is very high, so regulations are needed both to protect copyright and to promote this new technology.
Master Vuong Tuyet Linh of the Banking University of Ho Chi Minh City also said that the explosion of complex works created by AI poses a major challenge to the legal framework on copyright. A deep, academic view of this issue is therefore needed to fill the legal gap and ensure harmony between technology and law.
During the drafting of the Law amending and supplementing the Law on Intellectual Property, some opinions suggested that exploiting data protected by copyright and related rights for AI training should be allowed without permission, treating it as a limitation and exception to those rights. Others argued that permission should be sought and payment made when copyrighted data is used for AI training. If the latter were implemented, however, the question becomes: which party should pay the appropriate fee, the AI model developer, the application provider, or the end user? This is a difficult problem that requires coordination and clarification in the handling mechanisms of the competent authorities.
AI expands human creativity
The most important part of the AI life cycle is how it is operated to serve people. A legal framework is therefore needed to guide the governance and responsible development of AI.

According to experts, the risks of AI systems stem not only from technical errors but also from uncontrolled use and management. An AI model can produce erroneous data, exhibit algorithmic bias, and make automated decisions that affect human health, employment, or privacy. When integrated into fields such as finance, healthcare, education, or media, these risks can multiply if not monitored early. It is therefore necessary to establish a mechanism to assess, classify, and control risks throughout the entire AI life cycle to ensure social safety and user trust.
In fact, AI adoption in Vietnam is moving very quickly, underscoring the urgent need for a clear direction and governance framework. In 2024, about 47,000 Vietnamese enterprises began applying AI, equivalent to more than 5 enterprises per hour, bringing the total number of enterprises applying AI to nearly 170,000, or about 18% of all enterprises in the country. The Vietnamese chatbot market also shows strong growth potential: in 2024 it was valued at about 31.2 million USD and is forecast to reach 207.1 million USD by 2033, a compound annual growth rate (CAGR) of 18.5%. When enterprises first adopt AI, they often start with chatbots or smart interactive tools.
Therefore, according to Ms. Tran Vu Ha Minh (FPT Software Company), the State should soon issue specific guidelines on deploying chatbots and basic AI tools so that they comply with requirements on transparency, data protection, and accountability. For high-risk or key projects, detailed policies and standards are needed so that businesses can implement them confidently without worrying about legal or technical barriers.
The draft Law on Artificial Intelligence stipulates a series of obligations for parties providing and deploying high-risk AI systems, such as maintaining a risk management system throughout the system's life cycle, conducting continuous monitoring and re-assessment of the system's risk level, and having a technical mechanism to automatically record activity logs. When an incident occurs, the competent authority can require the supplier or deployer to provide activity logs to determine the cause, establish responsibility, and serve as a basis for compensation for damages.
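To illustrate what such automatic activity logging might look like in practice, below is a minimal sketch in Python, assuming a hypothetical deployment; the function name, record fields, and log file are illustrative assumptions, not requirements taken from the draft law.

import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical audit trail for a high-risk AI system: each inference call is
# appended to a structured log so that, if an incident occurs, the operator can
# hand over records showing when the system ran and with which model version.
logging.basicConfig(filename="ai_activity.log", level=logging.INFO, format="%(message)s")

def log_inference(model_version: str, user_input: str, output: str) -> str:
    """Append one activity record and return its identifier."""
    record_id = str(uuid.uuid4())
    record = {
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_chars": len(user_input),   # store sizes, not raw content, to limit privacy exposure
        "output_chars": len(output),
    }
    logging.info(json.dumps(record, ensure_ascii=False))
    return record_id

Records of this kind could later be retrieved to help determine the cause of an incident and establish responsibility, as the draft law envisages.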
According to lawyer Nguyen Tuan Linh of BMVN International Law Firm, risks should be classified according to the purpose of use rather than tied to the underlying technology. Risk assessment should be based on clear, objective criteria in line with international standards; overly broad risk-classification definitions could hinder the development of AI technologies that are unlikely to cause harm.
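To make the distinction between purpose-based and technology-based classification concrete, the following Python sketch assigns a risk tier from the declared use case; the tier names and example purposes are illustrative assumptions, not categories defined in the draft law.

from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

# The same underlying technology (for example, a general-purpose language model)
# falls into different tiers depending on what it is used for.
PURPOSE_TIERS = {
    "marketing_copywriting": RiskTier.MINIMAL,
    "customer_service_chatbot": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
}

def classify_risk(purpose: str) -> RiskTier:
    """Return the risk tier for a declared purpose, defaulting to a cautious middle tier."""
    return PURPOSE_TIERS.get(purpose, RiskTier.LIMITED)

print(classify_risk("credit_scoring"))        # RiskTier.HIGH
print(classify_risk("marketing_copywriting")) # RiskTier.MINIMAL

Under such an approach, a chatbot used for customer service and the same model used for credit scoring would face different obligations, which is the point the lawyer raises about not hindering low-risk applications.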
(To be continued)
Source: https://nhandan.vn/bao-ve-ban-quyen-trong-ky-nguyen-ai-bai-1-post921923.html





