The Chinese government has issued 24 guidelines requiring platform providers to register their services and undergo security assessments before bringing them to market. Seven agencies, including the Cyberspace Administration of China (CAC) and the National Development and Reform Commission, will share oversight responsibilities.
The regulation reflects China's determination to regulate AI, arguably the most promising and controversial technology of the past 30 years. That stance echoes Europe's but contrasts with the US, which has largely held back from regulation even as industry leaders have warned of the technology's dangers.
Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who is writing a series of papers on AI, said China has moved quickly, building the tools and regulatory mechanisms now so it is prepared to govern more complex applications of the technology later. Some of these practices, he noted, could be applied in other countries.
For example, Beijing requires labels on AI-generated content such as photos and videos; it also requires all companies to use “legitimate data” to train AI models and disclose that data to authorities if necessary; and domestic companies must provide clear mechanisms to handle public complaints about a service or content.
While the US's hands-off approach has given Silicon Valley tech companies the space to grow into giants, applying the same approach to generative AI poses serious risks, according to Andy Chun, an AI expert and adjunct professor at the University of Hong Kong. He said AI has the potential to profoundly change the way people work, live, and play in ways we are only beginning to realize, and that it poses clear risks and threats to humanity if developed without oversight.
In the United States, federal lawmakers have proposed a series of AI regulations, but their efforts remain at an early stage. The U.S. Senate is holding several briefings in the summer of 2023 to help members understand the technology and its risks before pursuing legislation.
In June 2023, the European Parliament adopted a draft AI Act, imposing new protection and transparency requirements for AI systems. The Parliament, member states and the European Commission must negotiate the final terms before the draft becomes law.
Beijing has spent years laying the groundwork for AI rules. The State Council introduced an AI roadmap in 2017, prioritizing technology development and setting a timeframe for government regulations. Agencies like the CAC then consult with academics and industry players to strike a balance between regulation and innovation.
Beijing's initiative is driven in part by the strategic importance of AI and its desire to gain an advantage over other governments, according to You Chuanman, director of the Center for Global Regulation and Governance at the Institute of International Affairs at the University of Hong Kong.
Beijing has made AI a technology priority, and after two years of crackdowns, the government has turned to the private sector to help revive its flagging economy and compete with the U.S. Billions of dollars have been pumped into the AI sector since ChatGPT went viral.
Alibaba, Baidu, and SenseTime all introduced AI models this year. SenseTime CEO Xu Li announced a chatbot that can write code from English or Chinese prompts. But Chinese companies still lag behind Western rivals like OpenAI and Google, and they will struggle to catch up, especially while American companies face almost no external constraints beyond their own self-regulation.
According to Helen Toner, Director of the Georgetown Center for Security and Emerging Technology, China is trying to “walk a tightrope” between goals such as supporting the AI ecosystem, maintaining social control, and censoring and manipulating the domestic information environment.
In the US, OpenAI has exercised little control over its chatbot's output, even when it is dangerous or inaccurate: ChatGPT has invented fake legal precedents and given users bomb-making instructions. In China, by contrast, companies have had to be far more cautious. In February, Yuanyu Intelligence suspended its ChatYuan service just days after launch over a comment about the Russia-Ukraine war. The startup has since abandoned the chatbot entirely to focus on its KnowX productivity service.
In the US, Big Tech employs "armies" of lawyers and lobbyists to fight almost any regulatory action, making effective AI rules difficult to enact, notes Aynne Kokas, an associate professor of media studies at the University of Virginia.
In Europe, the draft law would guarantee privacy controls and restrict the use of facial recognition software, requiring companies to conduct risk analyses of AI services that affect health or national security. But the EU's approach has also met resistance, with OpenAI threatening to "shut down" operations in countries whose regulations prove overly burdensome.
(According to Bloomberg)