Experience from other countries shows that legal regulation alone is not enough to govern AI, including controlling risks in its development, deployment, and application; other policy tools, often called "soft law", are also needed, such as ethical principles, codes of conduct, and industry technical guidelines.
Control measures should be proportionate to the level of risk to avoid over-regulation. For the highest-risk uses, strict, mandatory legal rules are necessary; for the next tier, legal requirements can be "softer"; for low risks, guidelines, technical standards, and voluntary codes of conduct suffice. AI risk control should also reflect the scope and power of the actors involved: actors with more power, such as the State or Big Tech companies, should be subject to stricter controls.
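The risk-proportionate approach described above can be sketched as a simple tier-to-controls mapping. The tier names and control measures below are illustrative assumptions for this sketch, not terms from any particular statute:

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers, ordered from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited outright
    HIGH = "high"                  # strict, mandatory legal requirements
    LIMITED = "limited"            # "softer" obligations, e.g. transparency
    MINIMAL = "minimal"            # voluntary codes and technical guidelines

# Illustrative mapping from risk tier to applicable control measures.
CONTROLS = {
    RiskTier.UNACCEPTABLE: ["prohibition"],
    RiskTier.HIGH: ["conformity assessment", "mandatory audit", "human oversight"],
    RiskTier.LIMITED: ["transparency notice"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}

def required_controls(tier: RiskTier) -> list[str]:
    """Return the control measures that apply to a given risk tier."""
    return CONTROLS[tier]
```

The point of the mapping is that obligations escalate with risk, so a regulator (or a compliance team) classifies a system once and derives its duties from the tier rather than negotiating them case by case.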
The opportunities and risks that AI poses to society and individuals can be addressed at each stage of the AI system life cycle: research, development, deployment, and application. Throughout that life cycle, the responsibilities of each actor (AI developer, deployer/operator, and end user) need to be defined clearly and specifically.
A general legal framework on AI needs the following major contents. First, provide clear definitions of AI, AI systems, and AI models as the starting point for all other regulatory provisions on this technology. Second, identify the core principles of AI governance: grounding the entire life cycle of an AI system in people, human values, and human rights, together with transparency, accountability, and trustworthiness. Third, define risk levels and the measures corresponding to each level. Fourth, specify the responsibilities of the relevant actors across the AI system life cycle.
In particular, AI systems should be restricted where they threaten the life or safety of individuals or groups. AI systems that discriminate, for example by age, gender, or religion, or that apply social scoring leading to adverse treatment, should be prohibited. The general legal framework should require algorithmic transparency in AI decision-making processes to prevent bias and discriminatory outcomes, for example in recruitment algorithms.
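One concrete way a transparency requirement can be checked in practice, for selection systems such as recruitment algorithms, is the "four-fifths rule" heuristic: a group's selection rate should not fall below roughly 80% of the highest group's rate. This is a minimal sketch of that check (the 80% threshold is a convention, and the function names are illustrative):

```python
def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(outcomes, threshold=0.8):
    """Return True if every group's selection rate is at least
    `threshold` (conventionally 80%) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())
```

Such a check is only a screening heuristic, not a legal determination, but it illustrates how a transparency mandate can translate into an auditable, quantitative test of an algorithm's outcomes.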
Alongside the general legal framework, it is necessary to issue legal documents, such as decrees and circulars, that regulate technical and specialized AI issues in more detail: for example, governing algorithms to ensure transparency and fairness and to avoid discrimination; regulating the use of facial recognition technology; and governing the use of AI in high-risk areas such as recruitment, employment, and performance evaluation.
Given the cross-cutting nature of AI, fragmented approaches to regulation should be avoided. To coordinate across agencies, it may be worth establishing a cross-sectoral mechanism that is representative, effective, and able to balance different perspectives, so that policies accommodate a broad range of interests. Diverse societal stakeholders should be involved from the beginning of the AI system life cycle: not only technical experts and businesses, but also legal and human rights experts, community representatives, civil society organizations, and other groups that may be affected by AI systems.
Policies regulating AI need to aim for social justice, ensuring the rights, interests, and participation of vulnerable groups such as people with disabilities, ethnic minorities, the poor, women, and children. These groups need special protections: AI systems that exploit their vulnerabilities and risk psychological or behavioral harm must be strictly regulated, including ensuring that AI technologies do not manipulate or coerce vulnerable individuals into acting against their own interests.
Source: https://daibieunhandan.vn/kiem-soat-rui-ro-cua-tri-tue-nhan-tao-mot-cach-cong-bang-post411511.html