The world is not ready for artificial general intelligence (AGI), the point at which AI matches the capabilities of the human brain, according to a senior researcher at OpenAI.
Is Artificial General Intelligence a Risk?
For years, researchers have speculated about the emergence of artificial general intelligence, or AGI, when artificial systems will be able to handle a wide range of tasks as well as humans can. Many see its emergence as an existential risk, since it could allow computers to act in ways humans could never anticipate.
According to Miles Brundage, the world is not ready for the moment of artificial general intelligence (AGI).
Now, the man tasked with ensuring that ChatGPT developer OpenAI is ready for AGI admits that neither the world nor the company itself is “ready” for the next step. Miles Brundage served as OpenAI’s senior advisor for AGI readiness but announced his departure this week as the company disbanded the group.
“Neither OpenAI nor any other pioneering lab is ready [for AGI], and neither is the world. To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, but rather a question of whether the company and the world are on track to be ready at the right time,” said Brundage, who has spent six years helping shape the company’s AI safety initiatives.
Brundage’s departure marks the latest in a series of high-profile departures from OpenAI’s safety teams. Jan Leike, a prominent researcher, left after saying that “safety culture and processes have taken a backseat to shiny products.” Co-founder Ilya Sutskever also left to launch his own AI startup focused on developing safe AGI.
The disbanding of Brundage's "AGI Readiness" team comes just months after the company dissolved its "Superalignment" team, which was dedicated to mitigating long-term AI risks, exposing tensions between OpenAI's original mission and its commercial ambitions.
Profit pressure drives OpenAI away from safety
OpenAI reportedly faces pressure to transition from a nonprofit to a for-profit company within two years — or risk losing funding from its recent $6.6 billion investment round. This shift toward commercialization has long been a concern for Brundage, who expressed reservations as far back as 2019 when OpenAI first established its for-profit division.
OpenAI CEO Sam Altman faces a "headache" from the pressure on the company to turn a profit.
In explaining his departure, Brundage cited increasing restrictions on his freedom to research and publish at the company. He emphasized the need for an independent voice in AI policy discussions, free from industry bias and conflicts of interest. Having advised OpenAI’s leadership on internal preparations, he believes he can now have a greater impact on global AI governance from outside the organization.
The departures also reflect a deeper cultural divide within OpenAI. Many researchers joined to advance AI research and now find themselves in an increasingly product-driven environment. Internal resource allocation has become a flashpoint — reports suggest that Leike’s team was denied computing power for safety research before it was disbanded.
OpenAI has faced questions in recent months about its plans for developing artificial intelligence and how seriously it takes safety. Although it was founded as a nonprofit dedicated to researching how to build safe artificial intelligence, ChatGPT’s success has attracted massive investment and pressure to commercialize the new technology.
Source: https://www.baogiaothong.vn/the-gioi-chua-san-sang-phat-trien-ai-thong-minh-nhu-bo-nao-con-nguoi-192241027005459507.htm