We just discovered that AI browsers can finish entire compliance modules without a single human touch: slides, quizzes, scenarios, all of it, running silently in the background. This breaks everything. If an AI can complete training on behalf of employees, our LMS completion records mean nothing in an audit or breach investigation, because we can't prove anyone actually learned anything.

The bigger problem is that we have zero visibility. Our current stack can't tell whether a person or an AI agent is interacting with the training portal. Complete blind spot.

We're rebuilding our whole approach for 2026, but I don't know what to do. Options we've considered:

- video verification (destroys user experience and accessibility)
- custom forms that require internal knowledge to answer (huge content creation burden)
- image-based hotspot assessments (probably temporary until AI catches up)

What we really need is a way to:

- detect when browser automation or AI agents are being used during sessions
- get alerts when completion patterns look suspicious
- block automated tools from accessing the LMS entirely
- keep audit logs that prove human participation

Has anyone found a solution that gives you visibility and control over what's interacting with your training systems? It feels like we need some kind of security layer sitting between users and the LMS, but I don't even know what category of product that would be.
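Before buying anything, the "alerts when completion patterns look suspicious" item can be prototyped from data most LMSs already export. Below is a minimal sketch of such a heuristic; the `Completion` record shape, field names, and thresholds are all hypothetical illustrations, not part of any real LMS API. It flags completions whose time-in-module is far below the cohort median for that module, or that show almost no recorded input events, two patterns an unattended agent tends to produce.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Completion:
    # Hypothetical per-session record exported from the LMS.
    user_id: str
    module_id: str
    duration_s: float        # wall-clock seconds spent in the module
    interaction_events: int  # clicks/keypresses logged during the session

def flag_suspicious(completions, min_ratio=0.25, min_events=5):
    """Return completions that finished in under `min_ratio` of the
    cohort-median duration for the same module, or that recorded
    fewer than `min_events` input events. Thresholds are illustrative."""
    by_module = {}
    for c in completions:
        by_module.setdefault(c.module_id, []).append(c)

    flagged = []
    for group in by_module.values():
        med = median(c.duration_s for c in group)
        for c in group:
            if c.duration_s < min_ratio * med or c.interaction_events < min_events:
                flagged.append(c)
    return flagged
```

This only catches crude automation (an agent that idles for a human-plausible time with synthetic input would pass), so it's a triage signal to pair with client-side automation detection (e.g., the standard `navigator.webdriver` flag) and a bot-management layer in front of the LMS, not a replacement for them.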
Originally posted by u/Sufficient-Owl-9737 on r/ArtificialInteligence
