SGGP
In a bid to tighten AI regulation, key European Union (EU) lawmakers have agreed on several amendments to draft artificial intelligence (AI) regulations to curb generative AI, such as ChatGPT.
Panasonic Connect Corporation has helped employees in Japan use AI systems to improve productivity. Photo: Nikkei Asia
Expecting the first complete law
The decisive vote on the draft AI regulation on May 11 marked a new step toward passing a law that will officially regulate AI tools like ChatGPT. The European Parliament's (EP) consumer protection and civil liberties committees approved the draft text, affirming the view that the use of AI in the EU needs to be regulated while promoting innovation in the field and respecting fundamental rights, so that AI serves people, society, and the environment.
After two years of discussion, the EU's forthcoming AI Act is expected to become the first comprehensive law regulating this technology. The draft adds provisions banning the use of facial recognition technology in public places (a point predicted to cause friction among EU countries) and of algorithmic tools that predict criminal behavior, and it covers generative AI applications such as OpenAI's ChatGPT as well as biometric checks. Accordingly, these applications must notify users that their output is created by machines, not humans.
The document also calls for additional criteria to identify high-risk areas for AI applications, thereby limiting the scope of tool design. AI tools will be classified according to the level of risk each one can pose, and governments and companies using them will be subject to obligations that vary with that risk level.
The draft text will be presented to the full EP next month before being sent to EU member states for review and finalization. While the list proposed by the European Commission (EC) already includes AI use cases in critical infrastructure management, education, human resources, public order, and immigration, MEPs also want to add thresholds to delineate threats to security, health, and fundamental rights.
Japan will take the lead
Many countries are also seeking solutions that both keep domestic industries from falling behind and address citizens' privacy concerns.
In Asia, the Japanese government’s first-ever Artificial Intelligence Strategy Council was convened to establish a framework to guide the development of AI. Speaking to the council, Prime Minister Fumio Kishida said: “AI has the potential to change our economic society in a positive way, but there are also risks. It is important to address both issues appropriately.”
The use of AI technology will help enhance industrial competitiveness and solve problems across society, but AI must be used responsibly and its risks to users minimized. So far, however, discussions have focused mainly on technical aspects. Japanese experts urge that future discussions be conducted from a broader perspective, with participation from fields such as business and law. According to Nikkei Asia, a challenge facing Japan is how to raise the level of domestic AI development while also focusing on regulating the use of generative AI, for which security, privacy, and copyright are key issues.
AI is starting to disrupt everyday life as fake images and videos, along with machine-generated text, raise concerns ranging from national security to misinformation. Digital and technology ministers from the Group of Seven (G7) have agreed to compile guidelines on the development and use of generative AI by the end of this year. With Japan holding the G7 presidency in 2023, Prime Minister Kishida has said Japan will take the lead in formulating international rules to make the most of AI's promise and deal with its risks.
Like Japan, the United States is moving: the White House announced last week that it would invest $140 million to establish seven AI research centers and publish guidelines on the use of this advanced technology, with the goal of creating rules that minimize risks without hindering AI-based innovation. Speaking before the Council of Advisors on Science and Technology, US President Joe Biden emphasized that AI can help tackle some very difficult challenges, such as disease and climate change, but potential risks to society, the economy, and national security must also be addressed, and technology companies have a responsibility to ensure their products are safe before they hit the market.