Been building this for a while. Two releases shipping the same day:

TigrimOS v1.1.0 — Mac and Windows, standalone app with a built-in Ubuntu sandbox. No Docker, no cloud dependency.

Tiger CoWork v0.5.0 — Linux native. Same feature set, no VM overhead. Designed to run directly on servers.

The headline feature: Remote Agents

Each TigrimOS instance already runs its own internal agent swarm. In v1.1.0 those swarms can talk to each other across the network. The interesting part is that it's not just node-to-node — it's swarm-to-swarm.

```
  Machine A (laptop)                Machine B (cloud GPU)
┌───────────────────┐             ┌───────────────────┐
│ Agent 1           │             │ Agent 4           │
│ Agent 2 ────── Orchestrator ────── Agent 5          │
│ Agent 3           │             │ Agent 6           │
└───────────────────┘             └───────────────────┘
```

The orchestrator reads the persona and responsibility of each remote node, picks the right swarm for the job, and delegates the whole task; that swarm handles it internally. Agents on different physical machines communicate exactly as if they were on the same box.

This also closes the obvious weakness of running a VM on a constrained desktop: you can attach a proper cloud GPU node for heavy inference and a database server for large-scale retrieval, and keep your laptop as the coordinator. Mix and match however makes sense for your workload.

Governance — five protocols, pick per job

This is the part I find most interesting architecturally. It's not one-size-fits-all.

👑 Star/Hub — single orchestrator, agents execute. Deterministic, no negotiation. Good for well-scoped tasks where you want predictable output.

📋 Blackboard — orchestrator posts tasks, agents bid based on skill and availability, best fit wins. Classic distributed auction. Good for mixed-specialty teams.

🔄 Pipeline — sequential handoff between agents. A finishes, passes to B. Good for structured workflows: research → draft → review → deliver.

🕸️ Mesh — fully decentralized, any agent delegates to any other directly. No central authority.
Good for open-ended research or creative tasks that benefit from multiple perspectives.

📢 Bus — broadcast to all agents simultaneously; whoever can handle it picks it up. Good for parallelizable workloads.

Each topology is configurable per session. You're not locked into one governance model for the whole system.

Other things worth knowing

∙ Each agent can have a different LLM backend — mix Claude Code, Codex, GLM, Minimax, local Ollama, whatever makes sense per role
∙ Sandbox isolation by default — agents cannot touch the host filesystem unless you explicitly mount a folder
∙ Long-running sessions are supported, with checkpoint recovery and context compression
∙ MCP server integration for external tooling
∙ Minecraft-style task monitor shows live agent activity, including inter-agent interactions (sounds gimmicky; actually useful for debugging multi-agent flows)

Upgrading from v1.0.0 needs no VM rebuild — SSH in and run a few commands.

Still early. Would genuinely appreciate feedback from anyone running multi-agent workflows — especially on the governance side; curious which topology people end up reaching for most.

Repo: https://tigrimos.github.io/
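To make the Blackboard topology above concrete, here's a minimal sketch of a skill-and-availability auction: orchestrator posts a task, agents bid, best fit wins. This is illustrative only; the names (`Agent`, `bid`, `run_auction`) and the scoring rule are my assumptions, not the TigrimOS API.

```python
# Hypothetical sketch of a Blackboard-style task auction.
# Not the TigrimOS API: all names and the scoring rule are invented
# here to illustrate the pattern described above.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    skills: set[str]   # what this agent is good at
    load: int          # tasks currently in flight

    def bid(self, required: set[str]) -> float:
        """Higher bid = better fit: skill overlap minus a load penalty."""
        overlap = len(self.skills & required)
        if overlap == 0:
            return 0.0          # can't do the job at all
        return overlap - 0.1 * self.load


def run_auction(agents: list[Agent], required: set[str]) -> Agent | None:
    """Orchestrator posts the required skills; highest non-zero bidder wins."""
    best_bid, winner = max(
        ((a.bid(required), a) for a in agents), key=lambda b: b[0]
    )
    return winner if best_bid > 0 else None


agents = [
    Agent("researcher", {"search", "summarize"}, load=2),
    Agent("coder", {"python", "review"}, load=0),
    Agent("reviewer", {"review", "summarize"}, load=1),
]
winner = run_auction(agents, {"python", "review"})
print(winner.name)  # coder: two matching skills, zero load
```

Star/Hub would skip the bidding entirely (the orchestrator assigns directly), and Mesh would let any agent run the same auction over its peers instead of routing through a hub.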
Originally posted by u/Unique_Champion4327 on r/ArtificialInteligence
