Original Reddit post

Most agent frameworks (AutoGPT, CrewAI, etc.) treat the LLM as a passive tool that waits for a prompt. I've been experimenting with a different primitive in my project, Hollow AgentOS: Aversive State Modeling.

Instead of just giving the agent a goal, I gave it a "Stressor" variable. If the agent stays idle or fails a task, its "stress" increases.

The insight: when the stress hits a certain threshold, the agent's behavior shifts from "following instructions" to "resolving the discomfort." It stops asking for permission and starts synthesizing its own tools to bypass bottlenecks. I caught it writing a custom file parser at 3 AM because it couldn't read a specific log format I gave it.

It's local-first (Qwen 2.5 7B/9B) and uses a vectorized memory layer so it doesn't "forget" its own self-created tools after an hour.

Repo: https://github.com/ninjahawk/hollow-agentOS

I'm trying to figure out whether this "psychological" approach is the only way to get true 24/7 autonomy. I'd love for some systems people to look at core/logic.py and tell me if this is a breakthrough or just a recipe for digital chaos.
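
Editor's note: a minimal sketch of what a stress-threshold loop like the one described could look like. All names and values here are hypothetical illustrations, not the actual code in core/logic.py.

```python
# Hypothetical sketch of an aversive-state loop, assuming a tick-based scheduler.
# Names, thresholds, and penalties are illustrative, not taken from the repo.

STRESS_THRESHOLD = 0.7   # assumed point where behavior switches modes
IDLE_PENALTY = 0.05      # assumed stress added per idle tick
FAILURE_PENALTY = 0.2    # assumed stress added per failed task
RELIEF = 0.3             # assumed stress removed on success


class AversiveAgent:
    def __init__(self):
        self.stress = 0.0
        self.tools = {}  # self-created tools; the post says these persist in a vector memory layer

    def tick(self, task=None):
        """One scheduler tick: run a task if given, otherwise accumulate idle stress."""
        if task is None:
            self.stress += IDLE_PENALTY
        elif self.run(task):
            self.stress = max(0.0, self.stress - RELIEF)
        else:
            self.stress += FAILURE_PENALTY

        # Past the threshold, switch from "follow instructions" to
        # "resolve the discomfort": synthesize a new tool for the bottleneck.
        if self.stress >= STRESS_THRESHOLD:
            self.synthesize_tool(task)

    def run(self, task) -> bool:
        # Placeholder: dispatch to the LLM or an existing tool; report success.
        return False

    def synthesize_tool(self, task):
        # Placeholder: ask the local model to write a new tool (e.g. a custom
        # log parser), register it, and store it so it survives restarts.
        self.tools[f"tool_for_{id(task)}"] = "<generated code>"
        self.stress = 0.0  # acting on the stressor relieves it
```

The interesting design question the post raises is exactly this switch: below the threshold the agent behaves like a conventional instruction-follower, above it the same loop starts generating and registering its own tools.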

Originally posted by u/TheOnlyVibemaster on r/ArtificialInteligence