The model can reason, solve complex mathematical problems, and answer scientific research questions, capabilities regarded as a significant step toward artificial general intelligence (AGI): machines capable of human-like cognition.
OpenAI says it is being particularly “cautious” about how it brings the o1 model to the public because of its advanced capabilities. Photo: Getty Images
According to the Financial Times, OpenAI has rated these new models as posing a "medium" risk for issues related to chemical, biological, radiological, and nuclear (CBRN) weapons, the highest rating the company has ever assigned to its own models. This means the technology has "significantly improved" the ability of experts to create biological weapons.
According to experts, AI software capable of detailed, step-by-step reasoning could increase the risk of misuse, especially in the hands of malicious actors.
Professor Yoshua Bengio, a leading AI scientist at the University of Montreal, emphasized that this "medium" risk level adds urgency to efforts to regulate AI, such as the SB 1047 bill currently being debated in California. The bill would require AI developers to take measures to reduce the risk of their models being misused to develop biological weapons.
According to The Verge, the security and safety of AI models have become a major concern as tech companies like Google, Meta, and Anthropic race to build and improve advanced AI systems.
These systems have the potential to deliver significant benefits by helping people complete tasks across many fields, but they also pose challenges for safety and social responsibility.
Cao Phong (according to FT, Reuters, The Verge)
Source: https://www.congluan.vn/openai-thua-nhan-mo-hinh-ai-moi-co-the-duoc-su-dung-de-tao-ra-vu-khi-sinh-hoc-post312337.html