Enterprise organizations are experiencing approximately 183 incidents of sensitive data being posted to ChatGPT per 10,000 users each month, according to Netskope.
Based on data from millions of enterprise users globally, researchers found that generative AI app usage is growing rapidly, up 22.5% over the past two months, amplifying the chances of users exposing sensitive data.
ChatGPT sees more than 8 times as many daily active users as any other generative AI app.
Over the past two months, the fastest-growing AI app was Google Bard, currently adding users at 7.1% per week, compared to 1.6% for ChatGPT. At current rates, Google Bard will not catch up to ChatGPT for more than a year, though the generative AI app space is expected to evolve significantly before then, with many more apps in development.
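The catch-up timeline implied by those growth rates can be sketched with a simple compounding-growth calculation. The gap sizes below are illustrative assumptions: the report states only that ChatGPT has more than 8 times the daily active users of any other app, so Bard's actual starting gap may be considerably larger.

```python
import math

def weeks_to_parity(user_gap, chaser_weekly_growth, leader_weekly_growth):
    """Weeks until the trailing app matches the leader's user count,
    assuming both weekly growth rates compound and stay constant."""
    # Each week the chaser closes the gap by this relative ratio.
    ratio = (1 + chaser_weekly_growth) / (1 + leader_weekly_growth)
    return math.log(user_gap) / math.log(ratio)

# Illustrative: an 8x gap (the report's floor) at 7.1% vs 1.6% weekly growth
print(weeks_to_parity(8, 0.071, 0.016))   # roughly 39 weeks
# A gap of about 15x or more is what would push parity past a year
print(weeks_to_parity(15, 0.071, 0.016))  # roughly 51 weeks
```

Since 8x is only a lower bound on the gap, a "more than a year" horizon is consistent with Bard trailing ChatGPT by a substantially wider margin than 8x.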
Other sensitive data being shared in ChatGPT includes regulated data (financial and healthcare data, personally identifiable information), intellectual property other than source code, and, most concerning, passwords and keys, usually embedded in source code.
Blocking access to AI-related content and AI applications is a short-term solution to mitigate risk, but it comes at the expense of the potential benefits AI apps offer for corporate innovation and employee productivity.