Coming out of a three-hour hearing on AI, Elon Musk, who leads several high-profile technology companies, summed up the technology's risks for reporters: "There is a chance, greater than zero, that AI will kill us all. I think the chance is low but not zero. The consequences of getting AI wrong are extremely catastrophic."
He also said the meeting “will go down in history for its importance to the future of civilization.”
The session, hosted by Sen. Chuck Schumer, brought together high-profile tech CEOs, civil society leaders, and more than 60 senators. The gathering, the first of nine sessions aimed at building consensus as the Senate prepares to draft legislation regulating the AI industry, also included the CEOs of Meta, Google, OpenAI, Nvidia, and IBM.
All attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters on the afternoon of September 13. But consensus on what that role should be and the specifics of the legislation remain elusive, according to attendees.
Benefits and risks
Bill Gates spoke about AI's potential to help fight poverty, while another attendee, whom he did not name, called for tens of billions of dollars to unlock the benefits of AI, according to Schumer.
The challenge for Congress is to promote those benefits while minimizing the societal risks of AI, including the potential for the technology to discriminate, threats to national security, and even, as X owner Musk put it, "risks to civilization."
Maximizing the benefits while minimizing the harm is a difficult task, Schumer said.
Senators heard a range of perspectives, with representatives from labor unions raising employment issues, and civil rights leaders stressing the need for an inclusive legislative process that gives a voice to the least powerful in society.
Most attendees agreed that AI cannot be left unregulated, said Sen. Maria Cantwell, a Washington Democrat. "When it comes to AI, we shouldn't think about autopilot," said Microsoft CEO Satya Nadella. "You need to have a partner."
After the event, Musk told reporters he thinks there will eventually be an independent agency to regulate AI.
Meeting of brilliant minds
Schumer noted that this was "an unprecedented discussion in Congress."
It reflects the growing awareness among policymakers that artificial intelligence, and especially generative AI like ChatGPT, can disrupt business and everyday life in a variety of ways, from increasing commercial productivity to threatening jobs, national security, and intellectual property.
The big-name guests arrived just before 10 a.m., with Meta CEO Mark Zuckerberg pausing to chat with Nvidia CEO Jensen Huang outside the Kennedy Caucus Room of the Russell Senate Office Building. Google CEO Sundar Pichai was seen huddled with Sen. Chris Coons, a Delaware Democrat, while X owner Musk waved to the crowd.
Inside, Musk sat at the back of the room across from Zuckerberg, possibly the first time the two men have been in the same room since they began sparring a few months ago.
The session on Capitol Hill in Washington also gives the tech industry its most significant opportunity yet to influence how lawmakers design rules that could govern AI.
Several companies, including Google, IBM, Microsoft, and OpenAI, have laid out their own in-depth proposals in white papers and blog posts describing layers of oversight, testing, and transparency.
IBM CEO Arvind Krishna argued during the meeting that US policy should regulate risky uses of AI rather than the algorithms themselves. "Regulation must take into account the context in which AI is deployed," he said.
Calls for oversight
Executives like OpenAI CEO Sam Altman have previously surprised some senators by publicly calling for early regulation of AI, which some lawmakers see as a welcome contrast to the social media industry.
Civil society groups have voiced concerns about the potential dangers of AI, such as the risk that poorly trained algorithms could unintentionally discriminate against minorities, or that AI systems could reproduce copyrighted works from writers and artists without permission. Some authors have sued OpenAI, while others have demanded payment from AI companies in open letters.
Media companies including CNN, The New York Times, and Disney have blocked ChatGPT from accessing their content.
American Federation of Teachers President Randi Weingarten said the United States cannot afford to make the same mistake with AI that it did with social media. "We failed to act after the harm social media caused to children's mental health became clear," she said in a statement. "AI should complement, not replace, educators and must take special care to prevent harm to students."
Policy development
Earlier this summer, Schumer held three information sessions for senators to get up to speed on the technology, including a confidential briefing with U.S. national security officials.
The Sept. 13 meeting with tech executives and nonprofits marked the next phase of educating lawmakers on the issue before they begin developing policy proposals. In June, Schumer stressed the need for a careful, deliberate approach, acknowledging that “in many ways, we’re starting from scratch.”
“AI is unlike anything Congress has tackled before,” he said, noting that the topic is different from labor, health care, or defense. “Experts aren’t even sure what questions policymakers should be asking.”
The goal after holding more sessions is to draft legislation in "months, not years," he added.
A slew of AI bills have emerged on Capitol Hill, seeking to rein in the industry in various ways, but Schumer’s push represents a higher-level effort to coordinate Congress’ legislative agenda on the issue.
The new AI law could also serve as a potential backstop for voluntary commitments that several AI companies made to the Biden administration earlier this year to ensure their AI models undergo testing before they are released to the public.
But even as lawmakers prepare, they are months, if not years, behind the European Union, which is expected to finalize a sweeping AI law by the end of this year that could ban the use of AI for predictive policing and restrict its use in other contexts.
(According to CNN)