By analyzing how 10,000 employees use ChatGPT and other generative AI (GenAI) apps, the report identifies key areas of concern. One alarming finding: 6% of employees have pasted sensitive data into GenAI tools, and 4% do so on a weekly basis. This recurring behavior poses a severe data exfiltration threat to organizations.
The report addresses vital risk-assessment questions: the actual scope of GenAI usage across enterprise workforces; the relative share of “paste” actions within that usage; how many employees paste sensitive data into GenAI, and how often; which departments use GenAI the most; and which types of sensitive data are most likely to be exposed through pasting.
The research also highlights how prevalent sensitive data exposure is. Among employees who use GenAI, 15% have pasted data into it, with 4% doing so weekly and 0.7% multiple times a week. This recurring behavior underscores the urgent need for robust data protection measures to prevent data leakage.
Source code, internal business information, and Personally Identifiable Information (PII) are the leading types of pasted sensitive data, most often pasted by users in the R&D, Sales & Marketing, and Finance departments.