Vietnam.vn - A platform promoting Vietnam

The threat of extinction and the paradox of the AI era

Công Luận - 01/11/2023


In May, OpenAI CEO Sam Altman appeared before the US Senate in Washington, DC, with an urgent call to lawmakers: in addition to taking advantage of AI's strengths, we also need strict regulations soon to minimize the risk of it overpowering humanity.


ChatGPT interface on the website. Photo: Shutterstock

“Profound risks to society and humanity”

With the launch of OpenAI's ChatGPT late last year, Altman, 38, emerged overnight as the creator of a new breed of AI tools that can generate images and text on demand from users, a technology known as generative AI.

ChatGPT became a household name not long after its release. Executives used it to compose emails, people built websites with no prior coding experience, and it passed exams in law and business schools. It had the potential to revolutionize nearly every industry, including education, finance, agriculture, media, journalism, and health, from surgery to vaccine development.

But those same tools have raised concerns about everything from cheating in schools to job losses, and even threats to the very existence of humanity.

The rise of AI, for example, has economists warning about the labor market. Goldman Sachs estimates that as many as 300 million full-time jobs worldwide could eventually be automated in some way by AI. About 14 million jobs could disappear in just the next five years, according to an April report from the World Economic Forum.

Additionally, in testimony before the US Congress, Altman said the potential for AI to be used to manipulate voters and spread targeted disinformation is one of "the areas that I am most concerned about."

Two weeks after the hearing, Altman joined hundreds of leading AI scientists, researchers, and business leaders in signing a letter stating: “Minimizing the risk of extinction from AI must be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.”

The warning has been widely reported in the press and media. And a growing number of people are saying that such apocalyptic scenarios need to be taken more seriously.

But there's a big paradox in Silicon Valley: CEOs of some of the biggest tech companies are telling the public that AI has the potential to cause human extinction, yet they're racing to invest in and deploy the technology to billions of users.

Altman has long expressed concern about the risks posed by AI and has pledged to move forward responsibly. He is among a number of tech CEOs meeting with White House leaders, including President Joe Biden and Vice President Kamala Harris, to stress the importance of developing AI ethically and responsibly.

But that’s not enough. Others want Altman and OpenAI to act more cautiously. Even Elon Musk, who helped found OpenAI before leaving the group, and dozens of tech leaders, professors, and researchers have called on artificial intelligence labs like OpenAI to stop training their most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

Altman said he agreed with parts of the letter, including that “safety levels need to be increased,” but said a pause would not be the “optimal way” to address the challenges.

OpenAI is still keeping its foot on the gas pedal. Most recently, OpenAI and iPhone designer Jony Ive were reportedly in talks to raise $1 billion from Japanese conglomerate SoftBank for an AI device to replace smartphones.

Can Sam be trusted?

When he started OpenAI, Altman told CNN in 2015 that he wanted to shape the path of AI, rather than worry about potential harms and do nothing. “I sleep better knowing that I can have some impact now,” he declared.

Still, Altman said he was concerned about the technology. “I’m prepared for survival,” he said in a 2016 New Yorker article, noting a number of potential catastrophic scenarios, including “AI attacking us.”


OpenAI CEO Sam Altman speaks before the US Senate on May 16 in Washington, US. Photo: AP

“But I have guns, gold, antibiotics, batteries, water, gas masks… and a huge piece of land in Big Sur that I can fly to,” he declared, implying that he personally could survive if disaster struck. It was clearly a selfish statement.

In perhaps the most sweeping effort yet, President Biden announced an executive order earlier this week requiring developers of powerful AI systems to share the results of their safety tests with federal authorities before releasing them to the public, to see if they pose a national security, economic, or health risk.

After the US Senate hearing, Emily Bender, a professor at the University of Washington and director of the Computational Linguistics Laboratory, expressed concerns about the future of AI even if it is strictly regulated. “If they honestly believe this could lead to the extinction of the human race, why not just stop it?” she said.

Good policymaking needs to be informed by many perspectives and interests, not just one or a few, and shaped by the public interest, said Margaret O’Mara, a technology historian and professor at the University of Washington. “The challenge with AI is that very few people and companies really understand how it works and what it means to use it, like the atomic bomb,” O’Mara said.

So it would be risky for the world to count on Altman to act in the best interests of humanity, given that he himself admits the technology could become a weapon of mass destruction and has already prepared a personal shelter in case of disaster. Besides, he is only human, not a superhero or a savior!

Hoang Hai (according to CNN, FT, Reuters)


