**We found a 3x token attribution distortion in a single agent workflow**

Was wiring token tracking into Sentience Governor and ran into something that's been bothering me. If one LLM reasoning step produces three tool calls, and your observability stack attributes the same token spend to all three events, your downstream analytics are mathematically wrong. Not slightly wrong. Structurally wrong.

Concrete example from a single agent session I ran:

- Naive event-level aggregation: 14,436 prompt tokens
- Attributed correctly at the reasoning-step level: 4,812 prompt tokens
- A 3x overstatement, silently, on one workflow

The fix is straightforward: every reasoning step needs an identity (we use `llm_turn_id`), and token spend attaches to the step, not to each downstream tool call. Aggregation becomes dedupe-safe by construction.

What's been bothering me more is the second-order implication. In non-deterministic agent systems, the normal ways we think about correctness start breaking down, and one of the things that starts replacing them is cost. Retries cost money. Loops cost money. Reasoning drift costs money. Every operational pathology shows up, eventually, in tokens.

Which means cost stops being just billing telemetry and becomes one of the few accountability surfaces that survives non-determinism. But only if the attribution is structurally correct. Otherwise you're not measuring agent behavior; you're measuring an artifact of how your trace events were aggregated.

Curious whether others are also starting to read cost as a behavioral signal rather than just billing, or if I'm reading too much into a single workflow.
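The dedupe-safe aggregation can be sketched in a few lines. This assumes trace events carry an `llm_turn_id` and a `prompt_tokens` field; the event shape and per-turn numbers are illustrative, chosen to mirror the 14,436 vs 4,812 figures above, not pulled from a real trace:

```python
# Hypothetical trace events: one reasoning step (llm_turn_id) fanning
# out into three tool-call events, each re-reporting the step's spend.
events = [
    {"llm_turn_id": "turn-1", "event_type": "tool_call", "prompt_tokens": 1604},
    {"llm_turn_id": "turn-1", "event_type": "tool_call", "prompt_tokens": 1604},
    {"llm_turn_id": "turn-1", "event_type": "tool_call", "prompt_tokens": 1604},
    {"llm_turn_id": "turn-2", "event_type": "tool_call", "prompt_tokens": 3208},
    {"llm_turn_id": "turn-2", "event_type": "tool_call", "prompt_tokens": 3208},
    {"llm_turn_id": "turn-2", "event_type": "tool_call", "prompt_tokens": 3208},
]

def naive_prompt_tokens(events):
    # Sums every event: the same step-level spend is counted once per
    # downstream tool call, so fan-out multiplies the total.
    return sum(e["prompt_tokens"] for e in events)

def attributed_prompt_tokens(events):
    # Attaches spend to the reasoning step: one count per llm_turn_id.
    # Fan-out into tool calls cannot inflate the sum by construction.
    per_turn = {}
    for e in events:
        per_turn.setdefault(e["llm_turn_id"], e["prompt_tokens"])
    return sum(per_turn.values())

print(naive_prompt_tokens(events))       # 14436
print(attributed_prompt_tokens(events))  # 4812
```

Keying the dedupe on the turn id rather than filtering by event type is the point: it stays correct even if new event types fan out from a step later.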
Originally posted by u/rohynal on r/ArtificialInteligence
