Original Reddit post

As a Claude Code user, you might realize that OpenClaw did not invent “magical memory”. What it does well, in my opinion, is reorganize project, session, and memory around local agentic AI tools such as Claude Code, Codex CLI, and similar coding agents. Before tools like OpenClaw, most AI coding tools were shaped around two concepts: project and session.

**Project**

A project is basically the folder where the agent is allowed to read files, and sometimes edit or delete them, depending on your permissions. For developers, this is natural. You cd into the right repo, start Claude / Codex / Cursor / another coding tool, and work from there.

But this is already friction for many people. Even for technical users, there is a small mental tax every time:

- Which folder should I open?
- Is this the right project?
- Should I start a new conversation?
- Should I find an old session and continue from there?

Sometimes you only want to ask something casually, but you still have to think like a filesystem operator first.

**Session**

A session is the continuous conversation, similar to a long ChatGPT or Gemini thread. The agent keeps the context inside that conversation, so the interaction feels natural while the session is alive. The weakness is obvious: when you start a new session, it does not automatically know what happened in the old one. You have to repeat yourself or manually recover context.

OpenClaw makes both concepts far less visible. When you install OpenClaw, you mostly care about one working folder. Technically, that folder is still a project, but the concept is softened: a non-technical user does not need to constantly think about which folder or repo they are inside.

Then you connect OpenClaw to chat surfaces like Telegram, WhatsApp, Discord, Slack, etc. Under the hood, every chat still maps to some kind of session inside that workspace. But from the user’s point of view, it feels like talking to the same assistant. That routing is important.
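To make the idea concrete, here is a minimal sketch of what such routing could look like. This is my own hypothetical illustration (the function name and session-key format are invented, not OpenClaw's actual code): every incoming chat resolves to a session key inside one workspace, so different surfaces can share conversation state.

```python
# Hypothetical sketch of chat-to-session routing, not OpenClaw's implementation.

def route_to_session(surface: str, chat_type: str, chat_id: str) -> str:
    """Resolve an incoming message's origin to a session key in the workspace."""
    if chat_type == "dm":
        # DMs from every surface collapse into one personal-assistant session.
        return "personal-assistant"
    # Each group chat gets its own shared session, scoped by surface and id.
    return f"group:{surface}:{chat_id}"

# Slack and Telegram DMs reach the same session:
assert route_to_session("slack", "dm", "U123") == route_to_session("telegram", "dm", "T456")
# A Slack group maps to one shared team session:
print(route_to_session("slack", "group", "C789"))  # -> group:slack:C789
```

The point of the sketch is only that routing is a small, explicit mapping, which is why one shared team session or one personal session across surfaces is easy to express.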
For example, all DMs from Slack and Telegram can point to the same personal assistant session, or a Slack / Telegram group can map to one shared team session. The tradeoff is also clear: one shared group context feels continuous, but it can get messy if the group discusses too many unrelated things. See the OpenClaw session doc if you want to dive deeper.

**The next trick is memory**

OpenClaw adds a simple but powerful file-based memory structure to the workspace. The important files are usually:

- AGENTS.md for agent instructions
- MEMORY.md for long-term stable memory
- memory/YYYY-MM-DD.md for daily notes and recent context

AGENTS.md is the kind of file coding agents already know how to read as workspace instructions. OpenClaw uses that familiar mechanism in a clever way: it tells the agent, very explicitly, that memory should be written into files instead of kept as a vague mental note inside one conversation. Here is an excerpt about memory from OpenClaw’s AGENTS.md:

> **Write It Down - No “Mental Notes”!**
>
> - Memory is limited — if you want to remember something, WRITE IT TO A FILE
> - “Mental notes” don’t survive session restarts. Files do.
> - When someone says “remember this” → update memory/YYYY-MM-DD.md or relevant file
> - When you learn a lesson → update AGENTS.md, TOOLS.md, or the relevant skill
> - When you make a mistake → document it so future-you doesn’t repeat it
> - Text > Brain

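Taken literally, the rule above is plain file I/O: append notes to a daily file, then rebuild context later by reading the long-term file plus recent dailies. A minimal sketch, assuming the AGENTS.md / MEMORY.md / memory/YYYY-MM-DD.md layout described earlier (the helper names are mine; in OpenClaw the agent itself edits these files rather than calling a library):

```python
from datetime import date
from pathlib import Path


def remember(workspace: Path, note: str) -> Path:
    """Append a note to today's daily memory file, creating it if needed."""
    memory_dir = workspace / "memory"
    memory_dir.mkdir(parents=True, exist_ok=True)
    daily = memory_dir / f"{date.today():%Y-%m-%d}.md"
    with daily.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")
    return daily


def load_context(workspace: Path, recent_days: int = 3) -> str:
    """Rebuild context for a fresh session: long-term memory + recent dailies."""
    parts = []
    long_term = workspace / "MEMORY.md"
    if long_term.exists():
        parts.append(long_term.read_text(encoding="utf-8"))
    memory_dir = workspace / "memory"
    if memory_dir.exists():
        for daily in sorted(memory_dir.glob("*.md"))[-recent_days:]:
            parts.append(daily.read_text(encoding="utf-8"))
    return "\n\n".join(parts)
```

Nothing here is model-specific: any agent that can read and write files in its workspace can follow the same pattern.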
This is not mysterious. It is closer to a second-brain habit for agents: write things down, because the model will not remember them reliably by itself. But that small rule changes the behavior of the whole system. Now, even when a new conversation starts, the assistant can reload selected context from files. It does not truly “remember” in a human sense. It persists useful context in a workspace, then learns to read and update that workspace over time.

That is why I think OpenClaw’s real contribution is not a new kind of memory. It turns several separate things in AI coding CLIs:

- project
- session
- tool access
- memory files
- chat surface

into one more continuous assistant experience. Non-technical users do not need to understand which folder or session is active. Technical users still keep the power of local agents: file context, tool execution, code editing, command-line access, and explicit memory.

**That insight is also why I started building clisbot**

I looked at the current ecosystem and asked myself: Anthropic, OpenAI, and Google are already spending enormous resources building very strong agentic CLI tools. Claude Code, Codex CLI, Gemini CLI, and similar tools are already good. So why rebuild the whole agent stack again when we can operate those tools directly?

So I tried something simple: take the workspace / memory pattern that makes OpenClaw useful, then run existing CLI coding agents inside it. Claude, Codex, Gemini. Same basic idea. It worked better than I expected. In some cases, it even felt more predictable for my workflow, because I could use the native CLI agent directly, with its own config, skills, MCP setup, slash commands, subscription model, and behavior I was already familiar with.

That became clisbot. The goal is not to compete with OpenClaw on the idea of “memory”. I think OpenClaw got a lot of the product direction right.
The goal is to make the same kind of durable assistant experience available on top of the CLI agents people already use:

- Claude / Codex / Gemini as the actual agent runtime
- Slack / Telegram as the chat surface
- a local workspace as the assistant’s home
- AGENTS.md / MEMORY.md / daily memory files as persistent context
- queue and loop commands for longer autonomous workflows
- personal and team assistant modes
- permission and route controls for practical team use

For developers who already live in Claude Code or Codex CLI, this feels very different from using a separate assistant that then tries to call another coding agent indirectly. It feels closer to home: you still control the workspace, config, tools, skills, and memory files, but you can talk to the agent from Slack or Telegram like a normal assistant. It is also lighter to install and easier to update, because the strongest agent layer is still the CLI tool you already trust. More importantly, it lets you take the AI subscription / CLI setup you already have and turn it into both a personal assistant and a coding assistant, without adding another API-heavy agent layer.

Repo: https://github.com/longbkit/clisbot

This week’s release also improves stability and the bot-native experience quite a lot. After this version, you can talk to the bot for almost everything: ask it to update itself to the latest version, create recurring schedules, use queue workflows with the skill included in the repo to improve output quality, or use /new to reset the conversation when the context becomes too noisy.

Thank you.

Originally posted by u/longbkit0811 on r/ClaudeCode