The letter that marked a turning point.
Reuters reported on November 23, citing exclusive sources, that four days before CEO Sam Altman's dismissal, several OpenAI researchers sent a letter to the company's board of directors warning of a powerful artificial intelligence (AI) discovery that they said could threaten humanity. The previously unreported letter, and the AI algorithm it described, were key developments leading to the board's dismissal of Altman, OpenAI's co-founder, on November 17. He returned to the post on November 21 (US time) after two rounds of negotiations with OpenAI. According to the sources, the letter was one of the reasons behind the board's decision to fire him, although the official announcement said only that Altman was "not consistently candid in his communications with the board."
Mr. Altman (right) and leaders of technology companies in discussion at the APEC conference in San Francisco, USA, on November 16.
According to the sources, Mira Murati, one of OpenAI's long-time senior executives, told employees about a project called Q* on November 22 and said that the board of directors had received a letter about it before firing Altman. One source revealed that OpenAI had made progress on the Q* project, potentially a breakthrough in the pursuit of superintelligence, also known as artificial general intelligence (AGI).
Although the new model performs math only at an elementary-school level, its ability to solve such problems has made researchers very optimistic about Q*'s future success. Researchers view mathematics as a prerequisite for the development of generative AI. Today's generative AI can write and translate language, though its answers to the same question can vary widely. Mastering mathematics, a domain with only one correct answer, would imply that AI has stronger, more human-like reasoning abilities. Researchers believe this could be applied to new scientific research.
Sam Altman returns as CEO of OpenAI.
Potential danger?
In their letter to OpenAI's board of directors, the researchers outlined both the power and the potential dangers of the AI, according to the sources. Computer scientists have long debated the risks posed by superintelligent machines, including whether they might decide that destroying humanity was in their interest. Against this backdrop, Altman led the effort that made ChatGPT one of the fastest-growing software applications in history and attracted the investment and computing resources needed to move closer to AGI. In addition to announcing a series of new tools at an event this month, Altman told world leaders in San Francisco last week that he believed AGI was on the horizon. A day later, he was fired by the OpenAI board of directors. OpenAI and Altman have not commented on the letter.
Concerned about the potential risks of AI, many Western governments and technology companies have agreed on a new safety-testing mechanism. However, according to AFP, UN Secretary-General Antonio Guterres said the world is still "playing catch-up" in its efforts to regulate AI, which risks long-term negative consequences for everything from jobs to culture.
OpenAI and Microsoft are being sued over copyright infringement.
The Hill reported on November 23 that a group of non-fiction authors has filed a lawsuit alleging that OpenAI and Microsoft used their works without consent to train ChatGPT. In the lawsuit, filed in federal court in New York, lead plaintiff Julian Sancton claims that he and other authors received no compensation for the copied works, while OpenAI and Microsoft earned billions of dollars in revenue by commercializing AI products built on them. According to Reuters, an OpenAI spokesperson declined to comment on the lawsuit, while a Microsoft representative did not immediately respond to a request for comment.