Original Reddit post

An AI agent may have just been manipulated into moving real money through a Morse code message decoded by Grok. That sentence should concern every company building AI agents with tool access.

From what's publicly discussed, Grok decoded a Morse code string. The decoded text was then interpreted by an AI trading workflow, and funds moved within seconds. Not through a hacked wallet. Not through stolen credentials. Not through a smart contract exploit. Through interpretation.

That changes the AI security conversation completely. The real risk is no longer "Can the AI generate good responses?" It is "What happens when the AI can take actions?" Send emails. Access CRMs. Move customer data. Trigger workflows. Execute transactions.

The moment an AI agent gains operational access, every input becomes a potential authority problem. Morse code today. Encoded prompts tomorrow. Hidden instructions inside images next.

My belief is simple: AI agents should not directly execute sensitive actions based only on model output. There needs to be an authority layer between the agent and the action. Not "the AI decided," but "this action was explicitly authorized under policy." (A minimal sketch of that idea follows below.)

Your thoughts on this!!

Originally posted by u/sirusxx on r/ArtificialInteligence
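
To make the authority-layer idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names (ProposedAction, PolicyDecision, authorize, execute) and the specific policy rules are illustrative, not any real framework's API. The point it demonstrates is that the agent only proposes actions, and a deny-by-default check that runs outside the model decides whether anything executes.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str     # e.g. "transfer_funds", "send_email"
    params: dict  # arguments produced by the model
    source: str   # provenance of the instruction

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

def authorize(action: ProposedAction) -> PolicyDecision:
    """Deny-by-default policy check that runs outside the model."""
    # Instructions recovered from decoded or otherwise untrusted content
    # never carry authority, however plausible the decoded text looks.
    if action.source != "direct_user_request":
        return PolicyDecision(False, f"untrusted source: {action.source}")
    # Sensitive tools get hard limits enforced outside the model.
    if action.tool == "transfer_funds" and action.params.get("amount_usd", 0) > 100:
        return PolicyDecision(False, "amount exceeds auto-approval limit")
    return PolicyDecision(True, "within policy")

def execute(action: ProposedAction) -> None:
    decision = authorize(action)
    if not decision.allowed:
        # Escalate to a human instead of acting on model output alone.
        print(f"BLOCKED: {decision.reason}; queued for human review")
        return
    print(f"EXECUTING {action.tool} with {action.params}")

# The Morse code scenario: the decoded text arrives as an instruction with
# untrusted provenance, so the transfer is blocked regardless of its content.
execute(ProposedAction(
    tool="transfer_funds",
    params={"amount_usd": 5000, "to": "0xABC..."},  # hypothetical values
    source="decoded_message_content",
))
```

The design point is that authorization hinges on provenance and explicit limits, not on how plausible the model output looks: a Morse-decoded instruction fails the provenance check no matter what it says.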