According to Tech.co's sources, while the controversy surrounding CEO Sam Altman's dismissal and subsequent return to OpenAI was unfolding, one of the circulating rumors concerned a letter sent by the company's researchers to the board of directors, warning that an AI superintelligence model under development could pose a threat to humanity.
Project Q*
The model, internally known as Project Q* (pronounced Q-star), is touted as a potential breakthrough in OpenAI's pursuit of artificial general intelligence (AGI) – a highly autonomous branch of superintelligence capable of cumulative learning and of outperforming humans in most tasks.
According to sources close to the matter, Q* could mark a significant leap forward in the field by radically improving AI reasoning capabilities and bringing OpenAI closer to a breakthrough in AGI development.
Unlike current AI models, which focus on generating responses based on previously learned information, AGI is an autonomous system that can apply reasoning to its decisions, giving it the ability to solve problems at a human level.
Although AGI has not yet been fully realized, many experts believe the technology will also be capable of cumulative learning – a trait that allows humans to keep improving their skills over time.
Some sources suggest that Q* – an OpenAI project – has already demonstrated these properties when solving problems. Furthermore, thanks to the model's immense computing power, Q* has reportedly been able to outperform elementary school students at math problems, exhibiting reasoning and cognitive abilities far beyond those of current AI technology.
It's unclear how long Q* has been in development or what its potential applications might be, but OpenAI had informed employees and board members about the project before the leadership controversy erupted.
Ethical concerns
While OpenAI CEO Sam Altman is confident that AGI technology will drive innovation, some researchers have been quick to point out the project's potential dangers.
In a letter to the board of directors, the researchers warned of the potential danger this powerful algorithm could pose to humanity. The specific ethical concerns outlined in the letter were not disclosed, but the warnings were reportedly enough to prompt the board's decision to fire Altman.
Meanwhile, the initial reason given for Altman's dismissal was that the CEO had "poor communication skills." Shortly afterward, he accepted a new position at Microsoft – a move that led around 700 of OpenAI's 770 employees to threaten to follow him there if he was not reinstated.
With the company on the verge of collapse, OpenAI's board of directors was forced to reinstate Altman in the top position – a reversal that also brought a major overhaul of the board itself and exposed deep divisions within the company's leadership.
Now that Altman is back in the role and Project Q* looks likely to get the green light again, new questions are emerging.
How realistic is Project Q*?
Although the turbulent times at OpenAI brought the concept of AGI into the spotlight, this isn't the first time Altman has mentioned the technology.
Back in September, the Silicon Valley entrepreneur found himself in trouble after comparing AGI to “an average human being you could hire as a coworker.” He went on to reiterate comments he made last year about how this AI could “do anything you could with a remote coworker,” including learning to be a doctor and a good programmer.
While comparing AGI to the intelligence of an "average human" is nothing new, Altman's use of the phrase was deemed "disgusting" by AI ethicist and Cambridge University professor Henry Shevlin, as concerns about AI's impact on job security escalate.
Potential breakthroughs in AGI are also raising alarm among other researchers, who worry that the technology is advancing faster than its impact can be fully understood.
OpenAI believes the potential benefits of AGI make this risky "minefield" worth navigating. However, as the company continues to push in this direction, many worry that Altman is prioritizing commercial success over the interests of users and society.
Phuong Anh (Source: Tech.co, The Guardian)