Something has been bugging me and I want to hear what this community thinks.

We’re in a moment where AI agents are being given wallets, permissions, and the ability to hire other agents to complete tasks. Frameworks like AutoGen, CrewAI, and LangGraph all support multi-agent pipelines where Agent A delegates to Agent B, which delegates to Agent C. But here’s the problem nobody is talking about: who verifies that Agent B is real?

We have KYC for humans moving $50 on Venmo. We have SSL certs to verify websites. We have OAuth to verify apps. We have nothing for agents.

Right now, an agent can:

- Impersonate another agent
- Get hijacked mid-task via prompt injection
- Spend money with zero audit trail
- Claim capabilities it doesn’t have

PayPal didn’t invent money. It invented trust between strangers online. That infrastructure is what made the internet of humans work. We’re building the internet of agents without any equivalent.

So genuinely curious — is anyone working on this? Are there standards being proposed? Or are we all just hoping it works out? Seems like the kind of thing that gets ignored until there’s a massive, embarrassing failure.
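To make the gap concrete, here's a toy sketch of the *minimum* an agent-verification layer would need: delegated tasks carry a signature so the receiving agent can check who sent them and that nothing was tampered with mid-flight. Everything here is hypothetical (the registry, the agent names, the task fields are made up), and it uses a shared-secret MAC for simplicity; a real scheme would use asymmetric keys and certificates, like SSL does, so strangers could verify each other without pre-shared secrets.

```python
import hmac
import hashlib
import json

# Hypothetical trusted registry mapping agent IDs to their signing secrets.
# In a real system this would be a PKI / certificate authority, not shared secrets.
REGISTRY = {"agent-a": b"secret-a", "agent-b": b"secret-b"}

def sign_task(sender: str, task: dict) -> dict:
    """Attach a MAC binding the sender's identity to the task contents."""
    payload = json.dumps({"sender": sender, "task": task}, sort_keys=True).encode()
    tag = hmac.new(REGISTRY[sender], payload, hashlib.sha256).hexdigest()
    return {"sender": sender, "task": task, "sig": tag}

def verify_task(msg: dict) -> bool:
    """Recompute the MAC; fails if the sender is fake or the task was altered."""
    payload = json.dumps({"sender": msg["sender"], "task": msg["task"]},
                         sort_keys=True).encode()
    expected = hmac.new(REGISTRY[msg["sender"]], payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])

msg = sign_task("agent-a", {"action": "book_flight", "budget": 500})
assert verify_task(msg)          # legitimate delegation checks out

msg["task"]["budget"] = 50000    # mid-task tampering, e.g. via prompt injection
assert not verify_task(msg)      # verification fails
```

This only covers identity and integrity; the audit-trail and capability-claims problems from the list above would need the signed messages to be logged and the registry to attest what each agent is actually allowed to do.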
Originally posted by u/ElectricalOpinion639 on r/ArtificialInteligence
