Hi all, I’m the builder of Gloamy, an open-source computer-use agent written in Rust: https://github.com/iBz-04/gloamy. My aim is to compete with Claude Cowork.

I’ve been working on agent systems for about two years, mostly focused on the gap between impressive demos and agents that can actually operate in a controlled, repeatable way on a real machine. Gloamy is my attempt to push on that problem from a systems angle rather than just wrapping a model in a thin UI layer.

The project is centered on computer use and autonomous task execution, with an emphasis on tighter control, security, and configurability. A big reason I chose Rust was to get stronger guarantees around reliability and to make it easier to build a structured runtime for agent actions, permissions, and execution flow.

A few lessons from building it so far:

- Raw model capability is not really the problem.
- Orchestration, safeguards, and state handling matter just as much.
- A lot of agent failures come from bad control flow, not just bad reasoning.
- “Autonomy” gets much harder when you care about repeatability and boundaries.

The current limitations are also very real. Computer-use agents are still fragile and environment-dependent, and they can break when UI structure, timing, or context shifts. I’m still exploring how far the current approach can go before parts need to be redesigned.

I’d be interested in technical feedback, especially around the architecture, the safety model, task planning, and where you think systems like this usually fail in practice.
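To make the “structured runtime for agent actions, permissions, and execution flow” idea concrete, here is a minimal Rust sketch of what a permission-gated action dispatcher can look like. This is my own illustrative example, not Gloamy’s actual API; all type and method names (`Permission`, `Action`, `Runtime::execute`) are hypothetical.

```rust
// Hypothetical sketch of a permission-gated action runtime.
// None of these names come from the Gloamy codebase.

use std::collections::HashSet;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Permission {
    ReadScreen,
    Click,
    TypeText,
    RunShell,
}

#[derive(Debug)]
enum Action {
    Screenshot,
    Click { x: u32, y: u32 },
    TypeText(String),
    Shell(String),
}

impl Action {
    // Every action maps to exactly one required permission,
    // so the gate below cannot be bypassed by a new action type.
    fn required_permission(&self) -> Permission {
        match self {
            Action::Screenshot => Permission::ReadScreen,
            Action::Click { .. } => Permission::Click,
            Action::TypeText(_) => Permission::TypeText,
            Action::Shell(_) => Permission::RunShell,
        }
    }
}

struct Runtime {
    granted: HashSet<Permission>,
}

impl Runtime {
    // Central choke point: every effectful action goes through here,
    // so denials are uniform and auditable.
    fn execute(&self, action: &Action) -> Result<(), String> {
        let needed = action.required_permission();
        if !self.granted.contains(&needed) {
            return Err(format!("denied: {:?} requires {:?}", action, needed));
        }
        // Dispatch to the real effectful implementation here.
        Ok(())
    }
}

fn main() {
    let rt = Runtime {
        granted: [Permission::ReadScreen, Permission::Click]
            .into_iter()
            .collect(),
    };
    assert!(rt.execute(&Action::Screenshot).is_ok());
    assert!(rt.execute(&Action::Shell("ls".into())).is_err());
    println!("ok");
}
```

The design choice the sketch tries to show is that typing actions as an enum forces the permission mapping to be exhaustive at compile time, which is one of the reliability guarantees Rust buys you here.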
Originally posted by u/Ibz04 on r/ArtificialInteligence
