Original Reddit post

Hi guys, sorry for being quiet. I've been super tied up. Almost a month ago I posted about v3.3, and a lot has happened since. Here's what shipped in v3.3.1 through v3.6.0.

First problem this time: AI CLI tools don't talk to each other.

Like many of us, I use Claude Code, Kimi CLI, and OpenAI Codex CLI. They all support skills (markdown instruction files), hooks (scripts that run on events), and slash commands. But they each have their own config directory and format. Skills I wrote for Claude don't exist for Kimi. Hooks in settings.json don't work in Codex (which uses config.toml). So I was maintaining three separate setups. v3.6 fixes this: one install, and all three tools get the same skills, hooks, and project instructions.

Here is everything that's shipped, starting from v3.3.1 (I'm skipping housekeeping updates):

v3.3.1 — Post-Compaction Task Restoration

The two-layer defense I posted about last time. A PreCompact hook tells the summarizer what to keep (Layer 1, best-effort), then a PreToolUse hook re-injects the full checkpoint on the first tool call after compaction (Layer 2, guaranteed). Layer 2 doesn't depend on the summarizer at all: it reads from disk. I find this super useful, especially because it preserves my context better than Claude's own compaction. A rough sketch of the Layer 2 idea follows.
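To make Layer 2 concrete, here's a minimal sketch of what a checkpoint-restoring PreToolUse hook could look like. This is my illustration, not the repo's actual script: the file paths, the sentinel-flag convention, and the assumption that the hook's output gets surfaced back into context are all assumptions.

```bash
#!/usr/bin/env bash
# Hypothetical Layer 2 restore hook (paths and flag convention are assumptions).
# Idea: the PreCompact hook drops a sentinel file; on the first tool call after
# compaction, this PreToolUse hook prints the full checkpoint from disk so it
# re-enters the model's context, then clears the sentinel so it fires only once.

CHECKPOINT="$HOME/.claude/checkpoints/current.md"   # assumed checkpoint location
SENTINEL="$HOME/.claude/checkpoints/.post-compact"  # assumed, set by PreCompact

if [[ -f "$SENTINEL" && -f "$CHECKPOINT" ]]; then
  cat "$CHECKPOINT"   # read from disk -- never depends on the summarizer
  rm -f "$SENTINEL"   # restore exactly once per compaction
fi
exit 0
```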

v3.4.0 — Skill Quality Gates

Built a linter for skills: 20 check rules across 4 categories: frontmatter validation, spec checks, content quality (detects ASCII art bloat, vague phrases, filler), and cross-reference validation. The CLI outputs text or JSON and supports inline suppression (<!-- skill-lint: disable=SP002 -->). Added CI workflows for automated lint + Tessl skill review on PRs, plus 18 behavioral eval scenarios across 15 skills. I think this is important.

Results: humbling. Scanning 59 skills turned up 1 error (mnemos missing frontmatter) and 85 warnings (19 skills over 500 lines, 30+ with ASCII art). Only 3 were clean. Quite humbling :)

v3.5.0 — Maggy (AI Engineering Command Center)

This one's big; I'll write a separate post on it. It's an optional extension that turns your issue tracker into an AI-prioritized inbox with one-click code execution. Okay, this was added quite a few weeks before OpenAI Symphony, but they get more airtime :)

- Local FastAPI + vanilla JS dashboard (no build step)
- GitHub Issues + Asana providers (Linear stubbed)
- AI-ranked inbox with a 30-min SQLite cache
- One-click TDD execute: spawns claude -p locally with iCPG context pre-injected
- Competitor discovery + RSS + Google News daily briefing
- Config-driven (~/.maggy/config.yaml) — no hardcoded org IDs

I also started doing multiple review passes, so this release went through three full ones (self-review, CodeRabbit, Copilot), which is why the changelog is enormous. It's a bit of overkill, but I'm experimenting.

Finally, last but not least:

v3.6.0 — Cross-Tool Compatibility + Cross-Agent Intelligence

This is a big one. Two parts:

Part 1: Cross-Tool Compatibility

One install.sh run, and all three tools have the same 61 skills, the same hooks, and the same project instructions in their native format.

- detect-agents.sh — finds which AI CLIs you have installed
- install-skills.sh — reusable skill copier for any target directory
- install.sh auto-detects Kimi and Codex and installs all 61 skills to ~/.kimi/skills/ and ~/.codex/skills/
- AGENTS.md template for Codex (mirrors CLAUDE.md with .agents/skills/ paths)
- config.toml template with hooks in TOML format (Kimi/Codex don't read settings.json)
- convert-hooks-to-toml.sh — converts JSON hooks to TOML
- /sync-agents command for on-demand cross-tool config sync
- /initialize-project now asks "Which AI CLI tools do you use?" with auto-detection

Part 2: Cross-Agent Intelligence

I've wanted to do this for a while, and this is where it gets interesting. The tools don't just share config — they collaborate.

Codex Auto-Review (Stop Hook): After tests pass, Codex automatically reviews your diff for critical bugs and security issues. It's a Stop hook that runs between TDD and iCPG recording. It truncates the diff to 8K chars to prevent token overflow. Exit 0 = clean; exit 2 = findings feed back to Claude for fixing. It gracefully skips if Codex isn't installed. (A sketch of this wrapper is below, after the Kimi section.)

Stop hook order:

1. tdd-loop-check.sh → tests pass?
2. codex-auto-review.sh → Codex reviews diff
3. icpg-stop-record.sh → record symbols
4. mnemos-checkpoint.sh → save memory

Kimi Delegation: Claude checks the iCPG blast radius before starting any task. If it touches <=3 files, Claude saves context via a mnemos checkpoint, runs kimi --print -y -p "…" headless with the context and task injected, and reads the results back. If it's 4-8 files, it asks the user. If it's 9+, Claude handles it directly. The user doesn't run anything — Claude orchestrates everything. (A sketch of this gate is below too.) I'm doing this to squeeze out maximum throughput: my current setup caps out at 10M tokens per day, and I want to combine Claude + Codex + Kimi to max it out.

Context transfer uses structured state (mnemos checkpoints + iCPG constraints), not raw conversation history. Kimi gets goals, constraints, file lists, git state, and the specific task. Not 50K tokens of back-and-forth.
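Here's the promised sketch of the auto-review step. It's my reconstruction of the shape, not the repo's actual codex-auto-review.sh: the Codex invocation (I'm using codex exec, the CLI's non-interactive mode), the prompt text, and the findings check are all assumptions. The 8K truncation and exit-code semantics are from the post.

```bash
#!/usr/bin/env bash
# Sketch of a Stop-hook auto-review wrapper (illustrative; the real
# codex-auto-review.sh may differ). Prompt and output handling are made up.

command -v codex >/dev/null 2>&1 || exit 0   # gracefully skip if no Codex

DIFF="$(git diff HEAD | head -c 8000)"       # truncate diff to 8K chars
[[ -z "$DIFF" ]] && exit 0                   # nothing to review

FINDINGS="$(codex exec "Review this diff for critical bugs and security
issues only. Reply NONE if clean.

$DIFF")"

if [[ "$FINDINGS" == *NONE* ]]; then
  exit 0              # clean: let the Stop hook chain continue
else
  echo "$FINDINGS" >&2
  exit 2              # findings: exit 2 feeds them back to Claude for fixing
fi
```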
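And a sketch of the blast-radius gate. The 3/8 thresholds and the kimi --print -y -p flags are from the post; everything about icpg's output format, the flags I pass it, and the checkpoint path is a hypothetical stand-in for illustration.

```bash
#!/usr/bin/env bash
# Sketch of the delegation decision (thresholds from the post; icpg parsing
# and context handoff are assumptions).

TASK="$1"

# Hypothetical: assume `icpg query risk` can emit a touched-file count.
FILES="$(icpg query risk --task "$TASK" --format count)"   # made-up flags

if (( FILES <= 3 )); then
  mnemos checkpoint                                # save structured state
  CONTEXT="$(cat "$HOME/.mnemos/checkpoint.md")"   # assumed checkpoint path
  kimi --print -y -p "${CONTEXT}

Task: ${TASK}"                                     # flags as given in the post
elif (( FILES <= 8 )); then
  read -r -p "Blast radius is ${FILES} files. Delegate to Kimi? (y/n) " ANSWER
  [[ "$ANSWER" == "y" ]] && kimi --print -y -p "Task: ${TASK}"
else
  echo "Blast radius is ${FILES} files; handling directly in Claude."
fi
```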
iCPG + Mnemos everywhere: All three tools run the same pre-task queries (icpg query prior, icpg query constraints, icpg query risk) and the same memory lifecycle (mnemos add goal, mnemos checkpoint). Same hooks, same skill, same behavior regardless of which CLI you're in.

New skill: cross-agent-delegation/SKILL.md teaches any agent when and how to delegate, including tool detection, blast radius rules, and the full 10-step orchestration workflow.

Repo: github.com/alinaqi/claude-bootstrap
Install: git clone, ./install.sh, then /initialize-project in any Claude Code session.

If you're using Kimi or Codex alongside Claude, the cross-tool stuff should save you a lot of duplicate config work. If you're only using Claude, everything still works the same — the multi-tool features are additive, not required.

Let me know what you guys think!
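P.S. For anyone skimming, the install steps above in one block. The repo path is from the post; the https URL form and the cd step are my expansion of "git clone".

```bash
# Quick start, per the post (URL expanded from github.com/alinaqi/claude-bootstrap):
git clone https://github.com/alinaqi/claude-bootstrap
cd claude-bootstrap
./install.sh    # auto-detects Claude Code, Kimi CLI, and Codex CLI
# then run /initialize-project in any Claude Code session
```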

Originally posted by u/naxmax2019 on r/ClaudeCode