Original Reddit post

I use coding agents a lot, and I write with LLMs enough that the same issues kept showing up. Agents would jump into code before they understood the repo, touch adjacent code I did not ask for, and say something was done without really verifying it. Text is a separate big problem, as you all know: too polished, too generic, too much AI slop even when the actual point was fine.

So I started writing down the rules I wished the agents followed, then tightened them whenever I saw the same failure happen again. Eventually that turned into two small repos I use myself:

- AGENTS.md / CLAUDE.md — global instructions for coding agents. Evidence before code. Small, scoped changes. Real verification. Better use of parallel work and subagents instead of one-step-at-a-time.
- WRITING.md — a ruleset for cutting the patterns that make LLM text feel pasted from a chatbot: filler, fake specificity, over-neat structure, repeated cadence, and the rest. It comes in three versions: the full ruleset (~3900 words), a compact version (~1000 words) for agent instructions and custom chats like GPTs and Gemini Gems, and a mini version (~155 words) to drop into any AGENTS.md or CLAUDE.md file as a section.

Both are public now. Use them as-is, borrow parts, disagree with the rules, or open an issue if something works differently in your setup. They solved some of the problems for me, and I’m curious what holds up for other people.

Originally posted by u/Anbeeld on r/ArtificialInteligence