Original Reddit post

Been spending a lot of time in r/AI_Agents and r/ArtificialInteligence since launching our Governor module, and I keep noticing the same thing: different teams describe the same operational pain using completely different vocabularies. Some call it observability. Some call it drift. Some call it logging. Some call it debugging. Some call it performance. But underneath all of them is the same gap: the agent did something different from what the operator believed, expected, or intended.

What's becoming clearer to me is that a lot of the industry is trying to force deterministic behavior onto fundamentally non-deterministic systems. That feels like the wrong target. You probably can't make execution deterministic. You probably can deterministically understand intent.

Curious if others building/running agents are seeing the same pattern.

Originally posted by u/rohynal on r/ArtificialInteligence