Original Reddit post

Hey everyone. Lately I’ve been wondering whether security robots, especially ones that use AI for perception and decision making, could be outsmarted by clever people over time. Most AI systems work well in controlled settings but can misinterpret unexpected or unusual behavior, which seems like a real problem in messy real-world environments.

From your experience, how robust are current AI models against someone deliberately trying to confuse them in crowded spaces? Are there ways to make these systems more reliable without relying entirely on human oversight?

I’ve also thought about how accessible this tech could become through stores like Newegg, Best Buy, and other global marketplaces including Alibaba, but I would definitely seek guidance before depending on these platforms for something as critical as security.

Have you seen security robots fail in unusual ways or get tricked in real life? Any papers or resources about adversarial attacks in the physical world would be really helpful. Curious what you all think.
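To make the "deliberately trying to confuse them" question concrete, here is a minimal sketch of a fast gradient sign method (FGSM) style perturbation on a toy linear classifier. Everything in it (weights, input, epsilon) is made up for illustration; real perception stacks are deep networks, but the core idea, nudging the input a small amount along the loss gradient, is the same one studied in the adversarial-examples literature.

```python
# Minimal sketch: FGSM-style attack on a toy linear classifier.
# All weights and inputs are illustrative, not from any real system.
import numpy as np

w = np.array([0.5, -0.3, 0.8])      # toy classifier weights
x = np.array([0.2, 0.1, 0.1])       # a "clean" input

def predict(v):
    # class 1 if the linear score is positive, else class 0
    return int(v @ w > 0)

pred_clean = predict(x)             # w @ x = 0.15 -> class 1

# FGSM step: for a linear model the loss gradient w.r.t. x is
# proportional to w, so stepping against sign(w) drives the score down
# as fast as possible under a per-feature (L-infinity) budget.
eps = 0.2
x_adv = x - eps * np.sign(w)

pred_adv = predict(x_adv)           # w @ x_adv = -0.17 -> class 0
```

Here a perturbation of at most 0.2 per feature flips the decision entirely, which is exactly the kind of brittleness that physical-world adversarial attacks (printed patches, adversarial clothing, etc.) try to exploit against camera-based security systems.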

Originally posted by u/Waltace-berry59004 on r/ArtificialInteligence