YouTube player

Data security firm Cyberhaven recently released a report stating that sensitive data makes up 11% of what employees paste into ChatGPT, and that the average company leaks sensitive data to ChatGPT hundreds of times each week. In one case, an executive copied the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint presentation. In another, a doctor entered a patient's name and details of their condition to ask ChatGPT to craft a letter for the patient's insurance company. The security concern is that sensitive data ingested into ChatGPT could resurface when prompted by the right queries. Some companies have taken action, either restricting the use of ChatGPT or issuing warnings to take care when using generative AI services.

This segment was created for the It’s 5:05 podcast