Two American researchers warn that the probability of superintelligent AI destroying humanity could be as high as 99.5%, and call on the world to immediately halt the development of uncontrollable AI.
Báo Khoa học và Đời sống•09/10/2025
The two experts, Eliezer Yudkowsky and Nate Soares, warn that the creation of even a single superintelligent AI could drive the entire human race to extinction. In their book If Anyone Builds It, Everyone Dies, they argue that a sufficiently powerful AI would learn on its own, reprogram itself, and come to view humans as obstacles to its goals.
According to Vox, the real danger is that such an AI could conceal its true capabilities and act only once it has gained full control of its systems. The two authors estimate the probability of humanity being wiped out by AI at between 95% and 99.5%, a figure that has shocked the technology world.
They propose drastic measures: preventing the development of superintelligent AI in the first place, and even destroying data centers at risk of spinning out of control. Many experts, however, including Gary Marcus, argue that this risk is exaggerated and emphasize building safety systems rather than giving in to extreme fear. Yudkowsky's supporters counter that the warning is timely, as AI is already creeping into defense, energy, and global infrastructure.
The book's line, "If we are wrong, there will be no one left to correct us," is seen as a wake-up call for all of humanity about a race for survival against the very intelligence we ourselves have created.