Specifically, Google's parent company Alphabet has advised employees not to enter confidential documents into AI chatbots, according to people familiar with the matter who spoke to Reuters. The company has also warned its engineers to avoid directly using computer code that chatbots generate.
Tech giant Google is also worried about information leaks through its own AI chatbot. Photo: Reuters
Chatbots, including Bard and ChatGPT, are programs that use so-called generative artificial intelligence to hold human-like conversations with users. Experts say chatbots can reproduce the data they absorb during those conversations, creating a risk of leaking user information.
The concerns show how Google wants to avoid business harm from software it launched to compete with ChatGPT. Google's caution also reflects a growing trend worldwide, with many companies warning employees against disclosing sensitive information to chatbots.
A growing number of businesses around the world have put safeguards in place on AI chatbots, including Samsung, Amazon and Deutsche Bank.
About 43% of tech professionals were using ChatGPT or other AI tools as of January, often without telling their superiors, according to a survey of nearly 12,000 people by US-based website Fishbowl.
By February, Google had told employees testing Bard not to share internal information with the chatbot, Insider reported. Google is now rolling out Bard to more than 180 countries and in 40 languages.
Google's AI technology is being applied to tools that automatically compose emails and documents, and even create software. However, the content these tools produce can contain false information and also poses a risk of information leakage.
Bui Huy (according to Reuters, Insider)