I’ve been building skills for Claude Code and OpenClaw and kept running into the same problem: static skills give the same instructions no matter what’s happening. A code review skill says “check for bugs, security, consistency” whether you changed 2 auth files or 40 config files. A learning-tracker skill makes the agent re-parse 1,200 lines of structured entries every session to check for duplicates, when Python could do that in milliseconds.

Turns out there’s a `` !`command` `` syntax buried in the Claude Code skills docs (https://code.claude.com/docs/en/skills#inject-dynamic-context) that lets you run a shell command before the agent sees the skill. The command’s output replaces it inline. So your SKILL.md can be:
```markdown
---
name: smart-review
description: Context-aware code review
---

!`python3 ${CLAUDE_SKILL_DIR}/scripts/generate.py $ARGUMENTS`
```
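To make the idea concrete, here is a minimal, entirely hypothetical sketch of what a generator script in this spirit could look like. The file-path heuristics, strategy names, and wording are invented for illustration; the actual script in the repo will differ:

```python
#!/usr/bin/env python3
"""Hypothetical computed-skill generator: prints review instructions
tailored to what actually changed in the working tree."""
import subprocess
import sys


def changed_files() -> list[str]:
    """List files changed relative to HEAD; empty outside a git repo."""
    try:
        out = subprocess.run(
            ["git", "diff", "--name-only", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line]
    except (OSError, subprocess.CalledProcessError):
        return []


def pick_strategy(files: list[str]) -> str:
    """Crude invented heuristic: auth paths get a security focus,
    config files get a consistency focus, everything else is general."""
    if any("auth" in f for f in files):
        return "security"
    if any(f.endswith((".yml", ".yaml", ".toml", ".json")) for f in files):
        return "consistency"
    return "general"


def render(strategy: str, files: list[str]) -> str:
    """Emit the markdown the agent will actually see as its skill body."""
    body = {
        "security": "Check auth flows, secrets handling, and input validation.",
        "consistency": "Check config values against existing conventions.",
        "general": "Check for bugs, clarity, and test coverage.",
    }[strategy]
    listing = "\n".join(f"- {f}" for f in files) or "- (no changes detected)"
    return f"# Code review: {strategy} focus\n\n{body}\n\nChanged files:\n{listing}\n"


if __name__ == "__main__":
    files = changed_files()
    sys.stdout.write(render(pick_strategy(files), files))
```

Whatever this prints to stdout is what the agent reads in place of the command line, which is the whole trick.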
The script reads git state, picks a strategy, and prints tailored markdown. The agent never knows a script was involved; it just gets instructions that match the situation. I’ve been calling this pattern “computed skills” and put together a repo with three working examples:
- smart-review — reads git diff, picks review strategy (security focus for auth files, consistency focus for config changes, fresh-eyes pass if same strategy fires twice)
- self-improve — agent tracks its own mistakes across sessions. Python parses all entries, finds duplicates, flags promotions. Agent just makes judgment calls.
- check-pattern — reuses the same generator with a different argument to do duplicate checking before logging

Interesting finding: I searched GitHub and SkillsMP (400K+ skills) for anyone else doing this and found exactly one other project (https://github.com/dipasqualew/vibereq). Even Anthropic’s own skills repo is 100% static.

Repo: https://github.com/Joncik91/computed-skills

Works with Claude Code and OpenClaw. No framework; the script just prints markdown to stdout. Curious if anyone else has been doing something similar?
Originally posted by u/Blade999666 on r/ClaudeCode
