Disclosure: I’m part of the team that built Computer Agents. The product is an agentic compute platform: instead of giving an AI agent only a chat history, we give it a persistent cloud computer, files, terminal/browser access, project tasks, and saved execution history.

The problem we were trying to solve: A lot of agent demos work because the task is short. Real work is messier. The agent needs to inspect files, install dependencies, run commands, fail, read logs, patch something, save outputs, and continue later. If the environment disappears every time, the agent repeats setup work and loses useful state.

Our technical approach:

- Each agent run can happen inside an isolated cloud environment
- Files and execution context persist across sessions
- Work can be organized into projects/tasks instead of one long chat
- Tasks can have reviewers, artifacts, logs, comments, and status
- Developers can use Python/TypeScript SDKs to create agents, threads, computers, projects, schedules, and webhooks
- Model routing is separated from workspace execution, so cheaper models can do high-volume steps like triage or repo search while stronger models handle harder reasoning and review

What worked better than expected: The biggest reliability improvement was not always a smarter model. It was giving the agent a stable workspace and a narrow ticket. “Fix this vague product area” performs poorly. “Reproduce this bug, don’t touch auth, run this command, summarize changed files” performs much better. (Rough sketches of the ticket structure and the routing table are at the end of the post.)

Limitations:

- I would not trust agents to merge production changes unsupervised
- Broad refactors still need human review
- Security-critical changes need strict permissions
- Parallel agents need file/task isolation or they create review chaos

Demo/docs: https://computer-agents.com/

I’m curious how others here are handling persistence for agents: do you keep state in chat history, vector memory, workflow graphs, containers, git worktrees, or full cloud workspaces?
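If it helps to picture “stable workspace + narrow ticket” concretely, here is a minimal sketch of the shape of such a ticket. The dataclasses and field names are stand-ins for illustration, not the actual SDK schema, and the workspace, command, and reviewer names are made up:

```python
# Illustrative only: these classes stand in for workspace/task objects.
# Field names and example values are assumptions, not the real Computer Agents API.
from dataclasses import dataclass, field


@dataclass
class Workspace:
    """A persistent cloud computer: files and execution state survive between runs."""
    workspace_id: str
    setup_commands: list[str] = field(default_factory=list)  # done once, not every run


@dataclass
class Task:
    """A narrow ticket: one goal, explicit constraints, a concrete repro command."""
    goal: str
    constraints: list[str]
    repro_command: str
    deliverables: list[str]
    reviewers: list[str]


def render_ticket(ws: Workspace, task: Task) -> str:
    """Turn the structured task into the text the agent actually receives."""
    return "\n".join([
        f"Workspace: {ws.workspace_id} (reuse existing files and dependencies)",
        f"Goal: {task.goal}",
        "Constraints: " + "; ".join(task.constraints),
        f"Reproduce with: {task.repro_command}",
        "Deliverables: " + "; ".join(task.deliverables),
        "Reviewers: " + ", ".join(task.reviewers),
    ])


if __name__ == "__main__":
    ws = Workspace("ws-billing-service", setup_commands=["pip install -e ."])
    task = Task(
        goal="Reproduce the duplicate-invoice bug and patch it",
        constraints=["do not touch auth code", "no schema migrations"],
        repro_command="pytest tests/test_invoices.py -k duplicate",
        deliverables=["patch", "summary of changed files", "test output"],
        reviewers=["alice"],
    )
    print(render_ticket(ws, task))
```

The point is less the exact fields and more that the agent starts from a workspace that already has the repo and dependencies, plus a ticket small enough to verify.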
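And a sketch of the routing idea: step types map to a model tier, and because routing is separate from workspace execution, the table can change without touching the workspace. The model IDs here are placeholders, not real endpoints:

```python
# Illustrative routing policy; model IDs are placeholders, not real model names.
CHEAP_MODEL = "small-fast-model"
STRONG_MODEL = "large-reasoning-model"

ROUTING_TABLE = {
    "triage": CHEAP_MODEL,       # high volume, low stakes
    "repo_search": CHEAP_MODEL,  # many calls, mostly retrieval
    "patch": STRONG_MODEL,       # writes code, needs stronger reasoning
    "review": STRONG_MODEL,      # gatekeeps what humans end up reading
}


def pick_model(step_type: str) -> str:
    """Default to the stronger model for anything the table does not cover."""
    return ROUTING_TABLE.get(step_type, STRONG_MODEL)


if __name__ == "__main__":
    for step in ["triage", "repo_search", "patch", "review", "unknown_step"]:
        print(f"{step:12s} -> {pick_model(step)}")
```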
Originally posted by u/docgpt-io on r/ArtificialInteligence
