Original Reddit post

A thought I can’t shake: people are getting used to installing random AI tools, agent frameworks, browser-use tools, local assistants, automation wrappers, and experimental apps with very little hesitation. Honestly, that changes the threat model. A strange installer used to be a red flag; now, if it looks polished enough and calls itself an AI tool, people seem far more likely to assume it’s innovative rather than suspicious.

That feels dangerous, not because the malware itself is necessarily new, but because the AI category has normalized weird permissions, unusual install steps, and “just trust it, it’s experimental” UX. At some point, “AI” stops being just a product label and starts becoming a social-engineering advantage.

Does this feel like a real emerging security problem to anyone else?

Originally posted by u/Individual-Gas5276 on r/ArtificialInteligence