In a newly released update to its AI (artificial intelligence) principles, Google's parent company, Alphabet, outlined how it will use AI in the future. Most notably, the company removed its promise not to use AI to build weapons, surveillance tools, or "technologies that could cause general harm."
Google's red line regarding AI
The term "red line" in connection with Google's AI first emerged in 2018, when company employees protested Project Maven, an AI collaboration with the US Department of Defense. At the time, more than 4,000 Google employees signed a petition demanding that the project be terminated and that the company never "build technology for war."
Google subsequently did not renew its contract to build AI tools for the Pentagon. The company also drew a red line, listing AI "applications we will not pursue," including weapons and technologies that gather or use information for surveillance in ways that violate internationally accepted norms. Technologies that cause or are likely to cause general harm, or that contravene widely accepted principles of international law and human rights, were likewise ruled out.
Google's decision to draw a red line between AI and weapons has prevented it from participating in military agreements signed by other tech giants, including Amazon and Microsoft.
However, faced with dramatic changes in the AI race, Google decided to retract its promise. The move caused significant controversy, not only within Google; it also marked a crucial shift for Silicon Valley tech companies in the defense industry.
Google is divided internally
According to Business Insider, the update to Google's AI ethics principles sparked a strong backlash from employees, who expressed their frustration on internal bulletin boards. A meme depicting CEO Sundar Pichai asking Google's search engine "how do I become an arms contractor?" drew significant attention.
Another employee created a meme asking, in effect, "Are we the bad guys for lifting the ban on using AI for weapons and surveillance?" Still, the company has more than 180,000 employees, and there may well be many voices supporting Google's decision to work more closely with the US government and its military and defense clients.
Google's reasoning
A Google spokesperson has not officially commented on the withdrawal of the "AI promise." However, the company's AI chief, Demis Hassabis, said the guidelines were being updated for a volatile world and that AI should "protect national security."
In a company blog post, Hassabis and James Manyika, Google's senior vice president for research, technology and society, stated that as global competition for AI leadership intensifies, Google believes AI development should be guided by freedom, equality, and respect for human rights.
They added: "We believe that companies, governments, and organizations that share values can collaborate to create AI that can protect people, drive global growth, and support national security."
The two executives noted that billions of people now use AI in their daily lives. Artificial intelligence has become a general-purpose technology, a foundation on which countless organizations and individuals build applications. It has moved from a specialized research subject in laboratories to a technology as ubiquitous as mobile phones and the internet. Google's 2018 "AI oath," they argued, therefore needed to be updated accordingly.
Alphabet says it plans to spend $75 billion next year, primarily on building AI capabilities and infrastructure.
Source: https://thanhnien.vn/google-rut-lai-loi-hua-khong-dung-ai-cho-quan-su-185250206161804981.htm