The model's ability to reason, solve complex mathematical problems, and answer scientific research questions is seen as a major step forward in the effort to develop artificial general intelligence (AGI): machines with human-like cognitive abilities.
OpenAI said it was particularly “cautious” about how it brought the o1 model to the public given its advanced capabilities.
According to the Financial Times, OpenAI rated the risk of these new models as “medium” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons, the highest rating the company has ever assigned to its models. This means the technology has “significantly improved” experts' ability to create biological weapons.
Experts say AI software capable of detailed reasoning can increase the risk of misuse, especially in the hands of bad actors.
Professor Yoshua Bengio, a world-leading AI scientist from the University of Montreal, has emphasized that this medium risk level increases the urgency for AI regulations, such as SB 1047, currently being debated in California. This bill would require AI makers to take measures to reduce the risk of their models being misused to develop biological weapons.
According to The Verge, the security and safety of AI models have become a major concern as technology companies like Google, Meta, and Anthropic race to build and improve advanced AI systems.
These systems have the potential to bring great benefits, helping humans complete tasks across a variety of areas, but they also pose challenges in terms of safety and social responsibility.
Cao Phong (according to FT, Reuters, The Verge)
Source: https://www.congluan.vn/openai-thua-nhan-moi-hinh-ai-co-the-duoc-su-dung-de-tao-ra-vu-khi-biological-post312337.html