The proposed European Union (EU) Artificial Intelligence (AI) Act is causing much controversy. (Illustration photo)
In an open letter to EU lawmakers on June 30, senior executives from companies including Siemens, Carrefour, Renault and Airbus raised “serious concerns” about the EU’s draft AI law, which, if adopted, would become the first comprehensive law for AI technology globally.
Among the signatories to the letter are big names in the technology industry, such as Yann LeCun, chief AI scientist at Meta (Facebook), and Hermann Hauser, co-founder of British chip designer Arm.
A group of more than 160 CEOs warned that the AI Act could damage the EU bloc's competitiveness and trigger an “investment exodus”.
“The Act would jeopardize Europe’s competitiveness and technological sovereignty without effectively addressing the challenges we face and will face,” they said.
These executives argue that the provisions in the Act go too far, especially in regulating generative AI and the foundation models that underpin popular platforms like ChatGPT.
The Dark Side
Since the AI craze began this year, technologists have warned about the dark side of systems that allow humans to use machines to write college theses, take academic tests, and build websites. Last month, hundreds of leading experts warned of the risk of human extinction due to AI, and that mitigating that risk “should be a global priority, alongside other large-scale risks such as pandemics and nuclear war.”
The EU proposal applies broadly to AI “regardless of its use case” and could push innovative companies and investors out of Europe because they would face high compliance costs and “disproportionate liability risks,” according to the executives.
“Such a regulation could lead to highly innovative companies moving operations abroad and investors withdrawing capital from the European AI sector, resulting in a serious transatlantic productivity gap [between Europe and the US],” they argue.
The executives are calling on policymakers to amend the terms of the bill, which was agreed by European Parliament lawmakers earlier this month and is now being negotiated with EU member states.
“In a context where we know very little about the real risks, business models or applications of AI, European legislation should limit itself to stating general principles in a risk-based approach,” the group wrote.
These business leaders have called for the establishment of a governing council of experts to oversee these principles and ensure they can continually adapt to changes in rapidly evolving technology.
The group also called on lawmakers to work with their US counterparts, noting that AI-related regulations are also being proposed in the US. EU lawmakers should work to “create a legally binding level playing field.”
If such actions are not taken and Europe is constrained by legal requirements, it could damage the EU's international standing, the CEOs warned.
“Like the invention of the Internet or the breakthrough of the silicon chip, AI is the kind of technology that will determine the performance, and therefore the significance, of different regions,” they wrote.
Tech experts have increasingly called for greater regulation of AI as it becomes more widely used. In recent months, the United States and China have outlined plans to regulate the technology. Sam Altman, CEO of ChatGPT maker OpenAI, has traveled the world calling for international coordination on AI.
Much Controversy
According to the European Parliament, the EU law is “the world’s first attempt to introduce” legally binding rules that apply to various areas of the AI market.
Negotiators of the AI Act hope to reach an agreement before the end of the year; once the final set of rules is approved by the European Parliament and EU member states, it will become the West's first formal AI regulation.
Currently, the proposed bill would ban AI systems deemed harmful, including facial recognition systems in public spaces, predictive policing tools, and social scoring systems.
The details of the draft law causing the most heated debate mainly concern personal privacy and copyright, such as whether AI should be allowed to track the movements of individuals in public places. China already does so, but the EU considers it a violation of privacy. Another open question is whether AI should be allowed to recognize emotions; Denmark, for example, has used voice-analysis AI to determine whether emergency callers were showing signs of impending cardiac arrest.
The bill also imposes transparency requirements on AI systems. For example, systems like ChatGPT would have to make it clear that their content is AI-generated and provide safeguards against the creation of illegal content.
Engaging in prohibited AI activities can result in fines of up to €40 million ($43 million) or an amount equivalent to 7% of a company's annual worldwide turnover.
But the penalties will be proportionate and take into account the market position of small-scale suppliers, suggesting there could be “leniency” for startups.
Not everyone is against this bill.
Earlier in July, the DigitalEurope trade association, whose members include SAP and Ericsson, called the draft rules “a document we can work with.”
“There are still a number of areas that can be improved to ensure Europe becomes a competitive hub for AI innovation,” Digital Europe said.
“We will listen to all concerns and stakeholders when dealing with AI regulation, but we are firmly committed to providing clear and enforceable rules,” Brando Benifei, an Italian member of the European Parliament who helped draft the bill, told CNN.
“Our work can positively influence the global conversation and direction when addressing AI and its impact on fundamental rights without hindering the pursuit of necessary innovation,” he said.