UNESCO calls for strict regulations to govern the use of artificial intelligence (AI) in the classroom. (Photo: LinkedIn)
In its latest guidance to governments, UNESCO stressed that national authorities remain unprepared to address the issues raised by deploying generative AI programs in schools.
The agency said that using AI programs in place of teachers could affect children’s emotional development and put them at risk of manipulation. The UNESCO guidelines state that AI tools can help children as research assistants, but they will be safe and effective only if governments regulate their use and teachers, students, and researchers are involved in designing the tools.
AI is undeniably changing the world and making daily life easier, significantly reducing the human workload across many fields.
However, warnings about the risks posed by AI are growing. The technology is said to be opening a new, complex, and fierce front in the battle between cybersecurity defenders and cybercriminals. Children stand to benefit from the opportunities AI brings, but they are also the most vulnerable to the dangers of this groundbreaking technology.
As AI becomes more prevalent globally, with investment set to boom in 2023, lawmakers around the world have been urgently considering how to mitigate the risks this emerging technology poses to national security.
In November, the UK government will host a global summit on AI safety, focusing on how to prevent AI from being used to spread misinformation and fake news during elections, and on the use of the technology in war.
Last May, leaders of the Group of Seven (G7) leading industrialized nations called for global standards to develop AI technology safely and reliably.
The rapid development of AI poses a difficult problem for authorities in many countries: how to balance promoting innovation and creativity with controlling the technology’s potential risks. Despite the benefits of AI products, many technology companies themselves warn of how dangerous the technology could become if it is not placed under close supervision.
Microsoft President Brad Smith recently stated that AI has the potential to be a useful tool, but it risks becoming a weapon against humanity if it escapes human control. Smith affirmed that technology companies should be encouraged to do the right thing, including creating new regulations and policies to ensure safety in all situations.
OpenAI CEO Sam Altman has also warned of the potential dangers of AI and emphasized the need to reduce them. Four major technology companies, Anthropic, Google, Microsoft, and OpenAI, have formed a new group called the Frontier Model Forum to build safety standards. Its main goals include promoting safe AI research to support development and reduce risks; helping the public understand the nature, capabilities, limitations, and impacts of the technology; and collaborating with policymakers and scholars to share knowledge about risks and safety.
Using AI is an irreversible global trend. Building development strategies so that AI truly serves human life is a necessary step in preparing future generations to live safely with artificial intelligence.