Original Reddit post

Think about what most cloud-based AI DLP solutions do. An employee pastes sensitive data into ChatGPT. Your security tool intercepts it and sends it to another cloud service for classification. That cloud service now has your sensitive data too. You solved a data leak by… creating a second data leak. Make it make sense.

If we're serious about preventing AI data leakage, the analysis needs to happen locally: on device, in the browser, at the moment of action. Data should never leave, no third party should ever see it, and there should be no cloud round-trip.
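The on-device check the post describes could start as simple client-side pattern matching that runs before any text leaves the page. A minimal sketch in JavaScript; the rule set, pattern choices, and function name here are illustrative, not from the post, and a real DLP tool would use far richer detection:

```javascript
// Minimal sketch of local, in-browser sensitive-data detection.
// Patterns are illustrative examples only; production tools would use
// a larger ruleset or an on-device classification model.
const RULES = [
  { label: "email",   pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  { label: "ssn",     pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { label: "api_key", pattern: /\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b/ },
];

// Runs entirely on the client: the text is never sent anywhere.
function classifyLocally(text) {
  return RULES.filter(r => r.pattern.test(text)).map(r => r.label);
}
```

In a browser extension, a function like this could gate a paste event so flagged text never reaches the AI chat box, for example by calling `classifyLocally` on `event.clipboardData.getData("text")` inside a `paste` listener and cancelling the event on a hit.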

Originally posted by u/shangheigh on r/ArtificialInteligence