Big Tech’s massive investments in artificial intelligence (AI) and chatbots have created headaches for corporate information security leaders. According to CNBC, AI is permeating the workplace, and information security leaders need to approach the technology with caution and prepare defenses.
The technology behind GPT is called a large language model (LLM). Not every company runs its own GPT, so it is important to monitor how employees use public chatbots. People will use generative AI if they find it useful, says Michael Chui, a partner at the McKinsey Global Institute, comparing it to the way employees came to use computers and phones at work.
Even where it is prohibited, employees will still find ways to sneak the technology in. Historically, some technologies have been so compelling that people were willing to pay for them themselves; workers bought their own cell phones before their companies provided devices, according to Chui.
CISOs, already dealing with the threat of cyberattacks, now have another worry with AI and GPT. According to Chui, companies can license an AI platform to monitor what employees are revealing to chatbots and ensure the information they share is protected.
“As a business, you don’t want your employees prompting a public chatbot with confidential information,” Chui says. So it is important to put technical measures in place, such as licensing software and signing legal agreements that govern where your data goes.
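One way such a technical measure can work in practice is to screen prompts for obvious confidential markers before they ever reach a public chatbot. The sketch below is illustrative only and is not from the article; the `redact_prompt` helper and the patterns it checks are assumptions, and a real deployment would rely on a proper data-loss-prevention or classification service rather than a short regex list.

```python
import re

# Hypothetical patterns a company might treat as confidential (illustrative only).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # US Social Security numbers
    re.compile(r"\b\d{13,19}\b"),                              # possible payment card numbers
    re.compile(r"(?i)\b(internal use only|confidential)\b"),   # document markings
]

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a confidential pattern before the prompt
    is forwarded to an external chatbot API."""
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this memo (Internal Use Only): employee SSN 123-45-6789."
    print(redact_prompt(raw))
    # -> "Summarize this memo ([REDACTED]): employee SSN [REDACTED]."
```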
A safer option, says Sameer Penakalapati, CEO of Ceipal, is for a company to develop its own GPT or hire GPT developers to build a custom version. With its own GPT, a company can limit the software to exactly the information it wants employees to access, and it can protect the information employees put in. Whichever solution you choose, you need to be careful about the data that goes into the GPT.
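As a rough illustration of that idea (a minimal sketch under assumptions, not a description of Ceipal’s or anyone else’s product), an in-house assistant can be restricted to an explicitly approved set of internal documents and can log every employee prompt; the document names, `build_context`, and `send_to_private_model` below are all hypothetical placeholders.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Only content the company has explicitly approved for the assistant (illustrative).
APPROVED_DOCUMENTS = {
    "hr_policies.md": "Vacation requests must be submitted two weeks in advance.",
    "it_faq.md": "To reset your password, open the self-service portal.",
}

def build_context(question: str) -> str:
    """Assemble the model's context from approved documents only.
    Naive keyword overlap stands in for real retrieval (e.g. embeddings)."""
    words = set(question.lower().split())
    return "\n\n".join(
        text for text in APPROVED_DOCUMENTS.values()
        if words & set(text.lower().split())
    )

def send_to_private_model(prompt: str) -> str:
    """Placeholder for a call to a self-hosted model; simply echoes the prompt here."""
    return f"[private model would answer based on]\n{prompt}"

def ask_internal_assistant(question: str) -> str:
    logging.info("employee prompt: %s", question)  # retain a record of what employees input
    context = build_context(question)
    return send_to_private_model(f"{context}\n\nQuestion: {question}")

if __name__ == "__main__":
    print(ask_internal_assistant("How do I reset my password?"))
```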
(According to CNBC)