This is not the first time scientists have warned that artificial intelligence could become dangerous to humans. In 2014, the renowned British theoretical physicist Stephen Hawking said: "The comprehensive development of artificial intelligence could destroy humanity." He also warned that humans could be completely replaced by artificial intelligence within the next 500 years if they are not careful in researching and developing this dangerous technology. Eight years ago, in July 2015, more than 1,000 experts in artificial intelligence, including Stephen Hawking, Elon Musk, and Noam Chomsky, signed a petition warning of the risk of an arms race in military AI and calling for a comprehensive ban on autonomous weapons. OpenAI's launch of ChatGPT in November 2022 added another warning bell about the existential dangers of artificial intelligence. Indeed, what ChatGPT can already do shows that AI is, in some respects, far more dangerous than people once thought.
The potential risks of artificial intelligence depend on how it is used. AI was initially developed to assist humans with their work, especially with tedious and repetitive tasks such as image classification and information review. However, given the rapid development of AI in recent years, scientists and technology experts are concerned that artificial intelligence developed without control, or programmed for the wrong purposes, could cause serious harm to people and society.
The risk to security
Artificial intelligence could pose a threat to national security and personal safety. Elon Musk has warned that AI could become a terrifying weapon of war. If a country develops a superior artificial intelligence system, it could use that system to attack other countries. Stuart Russell, a professor of computer science at the University of California, Berkeley, has spent decades researching artificial intelligence. Warning of the security threat posed by AI, he has said that autonomous attack weapons can already be produced by integrating and miniaturizing existing technologies.
In an experiment conducted by Professor Stuart Russell and the Future of Life Institute (FLI), the killer robot ("slaughterbot") was a tiny flying device equipped with cameras, sensors, image-processing and facial-recognition software, a 3-gram explosive charge, and a high-speed microprocessor that let it analyze data and respond 100 times faster than the human brain. Following its programmed parameters, the killer robot continuously accessed cloud data to gather information about its target and find a way to approach it. Once the target was found, it rushed at it at high speed and detonated the 3-gram charge, drilling into the skull and killing the victim in the blink of an eye.
This is just an experiment with AI at its most basic level. If AI is used to develop newer, more sophisticated attack methods, it could amplify attackers' capabilities and lead to far more serious consequences than conventional attacks. Once AI develops to the point where it can decide on its own how to respond to changes in its environment, find alternative targets, or expand its target range, humanity may no longer be safe. Many of Professor Stuart Russell's colleagues agree with him that there is still a chance to prevent such a bleak future, but not much time is left.
The risk of AI becoming autonomous and taking control of systems
AI that is designed incorrectly or improperly "trained" through machine learning can perform unwanted actions that harm people and the environment. Consider a specific scenario: through machine learning, AI is becoming increasingly intelligent. When artificial intelligence advances from merely supporting critical systems such as electricity, transportation, healthcare, and finance to mastering and controlling them, it could make and execute decisions in emergency situations on its own. If such an AI is also "equipped" with "ambitious" goals through flawed design, it could cause serious consequences: disrupting traffic by disabling traffic-light systems, or cutting power to urban rail networks, triggering chains of accidents, widespread power outages, and so on.
Hollywood and the film industries of many other countries have made numerous movies based on this scenario. With current AI technology, however, this is no longer a distant prospect; it could well become reality. Elon Musk believes that if AI is allowed to develop without control, to the point where it automates decisions without human intervention, it could threaten human survival. That is why he and thousands of technology experts signed a letter calling for a temporary halt to AI development and for strict, transparent control of the process. According to Musk, artificial intelligence systems are very complex, difficult to understand, and very difficult to control. Without transparency, the use of artificial intelligence for unethical purposes that harm humans is certain to occur.
The risks of job loss and privacy violations
When used to collect personal information, artificial intelligence can be employed to violate privacy, run disinformation campaigns, and spread false information that undermines social and political stability. The hacking of the network of the US government's Office of Personnel Management (OPM) is a cautionary example for any country or organization: detailed, highly sensitive personal data on 21.5 million current and retired US officials fell into hackers' hands, including information about their spouses and data on applicants for government jobs such as health status, residence, employment history, fingerprints, and financial information. More frightening still, the CIA discovered that the OPM data thieves had used artificial intelligence to scan and fake personal information and to create virtual spies to steal the data. The stolen information can then be analyzed, classified, and exploited with big-data technology for specific purposes. Artificial intelligence shortens the time thieves need to exploit such information, because humans cannot process so huge a volume of data in so short a time.
Beyond the dangers described above, AI technology also creates other risks. Over-reliance on it can cause people to lose control of situations and weaken their ability to make decisions. Artificial intelligence will also cost millions, even hundreds of millions, of people their jobs, especially in purely manual work with little knowledge content; as a result, serious social problems such as unemployment, crime, and the gap between rich and poor will deepen. Reliability is another concern. The reliability of decisions in highly specialized fields such as medicine and criminal law is extremely important. If AI is used to support decisions in those fields while the technology is not yet reliable enough, operates on data biased toward one person or group, or has been deliberately distorted, those decisions are likely to cause serious consequences.
How can these risks be prevented?
These risks are why Elon Musk and other technology experts, concerned about the uncontrolled development of AI, wrote a letter calling on the United Nations and international organizations to strictly control AI development, ensure its safety and ethics, and strengthen international cooperation so that AI technology is used for the benefit of people and society. Some specific measures can be identified:
- Controlling system reliability: AI must be designed and developed to be safe and reliable, through rigorous, regular, and continuous quality testing, control, and monitoring, so that AI systems behave as humans expect and the unwanted effects of unreliability are avoided.
- Artificial intelligence must be programmed to operate according to ethical values and social norms. Those rules and standards need to be clearly established and monitored so that AI systems do nothing bad or harmful to people or organizations. Impact analysis and assessment, and monitoring of AI systems in operation, must also be carried out regularly to ensure that the systems operate properly, do not violate or erode the shared values of society, do not cause negative impacts, and transmit correct knowledge and values.
- Pause the "training" of AI systems for at least six months to allow a full reassessment. The pause must be public and controlled, and all key actors involved in training must take part; if necessary, governments should impose a temporary moratorium. This does not mean halting AI development altogether, but simply stepping back from the dangerous race toward ever more powerful models in order to make AI systems more accurate, safe, understandable, transparent, consistent, trustworthy, and loyal. This is the comprehensive solution that Elon Musk and his colleagues call for in their open letter, but it is not very feasible: it is very difficult to get every independent laboratory to comply with such a non-binding, cross-national appeal.
- Strengthen cooperation and information sharing among countries and independent AI research organizations to ensure that AI is developed and used safely in the future. Cooperation and information sharing also make it possible to warn of and prevent emerging risks in time.
In general, AI is now present everywhere around us, and stories of artificial intelligence writing poetry, composing music, and drafting articles have become reality rather than fiction. The author of this article has himself used artificial intelligence on computers and phones to assist his writing, making the process more convenient and faster. In fact, the structure of this article and some of its ideas were suggested by AI, and all of the photos used in it were created by AI. Artificial intelligence is very useful for improving the quality of human work and life, but it must be tightly controlled so that it does not slip out of control and come to replace humans, as the theoretical physicist Stephen Hawking predicted.
Article and photos: HUU DUONG