Designed and built a security layer for AI agents that solves prompt injection the leading vulnerability in autonomous AI systems. The system ensures that only instructions verified by the user with a signed token can direct the agent, while all external content is structurally prevented from issuing commands. Built as model-agnostic middleware so any AI agent can integrate it. Prototype built in Python using Anthropic’s Claude API, live-tested with real web fetch and file operations. There are still unfinished parts/functions and looking for tech savvy partners submitted by /u/vagobond45
Originally posted by u/vagobond45 on r/ArtificialInteligence
You must log in or # to comment.
