Original Reddit post

**The Anchoring Technique**

We've all heard of recency bias, but did you know it actually changes how the model weighs your instructions? With a massive block of text, the model is statistically more likely to be influenced by what's at the very end. If your prompt is long, repeat your most critical instructions at the very bottom as a cue. It's like a jumpstart for the output.

**Stop Writing Paragraphs, Start Building Components**

The pros don't just write a prompt. They treat it like a sandwich with specific layers: instructions, primary content, supporting content, and cues.

**Give the Model an Out (The Hallucination Killer)**

This is so simple, but I rarely see people do it. If you're asking the AI to find something in a text, explicitly tell it: "Respond with 'not found' if the answer isn't present."

**Few-Shot Is Still King (Unless You're on o1/GPT-5)**

The docs mention that for most models, few-shot learning (giving 2-3 examples of input/output pairs) is the best way to condition the model. It's not actually learning, but it primes the model to follow your specific logic pattern. Apparently this is less recommended for the new reasoning models (like the o-series), which prefer to think through things themselves.

**XML and Markdown Are Native Tongues**

If you're struggling with the model losing track of which part is the instruction and which is the data, use clear separators (like `---`) or XML tags (e.g., `<context></context>`). These models were trained on a massive amount of web code, so they parse structured data far more efficiently than a wall of text.

Since I'm building a lot of complex workflows lately, I've been using a prompt engine. It auto-injects these escape hatches, delimiters, and such.

One weird space-saving tip I found: in terms of token efficiency, spelling out the month (e.g., March 29, 2026) is actually cheaper in tokens than a fully numeric date like 03/29/2026. Who knew?
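The "sandwich" layering and the anchoring trick above can be sketched together. This is a minimal illustration, not any real library; the layer names and `build_prompt` helper are my own invention, and the recency anchor is the repeated instruction near the bottom:

```python
# Sketch of the layered "sandwich" prompt, with the critical instruction
# repeated at the bottom to exploit recency bias. All names are illustrative.

def build_prompt(instructions: str, primary_content: str,
                 supporting_content: str, cue: str) -> str:
    """Assemble a layered prompt: instructions first, content in the
    middle, and a reminder plus cue at the very end."""
    return "\n\n".join([
        f"INSTRUCTIONS:\n{instructions}",
        f"PRIMARY CONTENT:\n{primary_content}",
        f"SUPPORTING CONTENT:\n{supporting_content}",
        # Recency anchor: restate the key instruction right before the cue.
        f"REMINDER:\n{instructions}",
        cue,
    ])

prompt = build_prompt(
    instructions="Summarize the report in exactly three bullet points.",
    primary_content="<the report text goes here>",
    supporting_content="Audience: executives. Tone: neutral.",
    cue="Summary:",
)
print(prompt)
```

The key property is that the instruction string appears twice, once at the top and once just above the cue, so a long middle section can't bury it.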
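The escape-hatch pattern is just a line of prompt text, but here's one way to template it so it's never forgotten. The `extraction_prompt` helper and its wording are illustrative, not from any framework:

```python
# Sketch of the "give the model an out" pattern: the prompt explicitly
# licenses a "not found" answer so the model isn't pushed to invent one.

def extraction_prompt(question: str, passage: str) -> str:
    return (
        "Answer the question using ONLY the passage below.\n"
        "If the answer is not present in the passage, "
        "respond with exactly: not found\n\n"
        f"Passage:\n{passage}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

print(extraction_prompt("Who signed the contract?", "<passage text here>"))
```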
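The few-shot conditioning described above is just input/output pairs prepended before the real input. A minimal sketch, with invented sentiment examples (the task and labels are mine, for illustration only):

```python
# Few-shot sketch: 2-3 input/output pairs establish the pattern, then the
# prompt ends mid-pattern so the model completes the final label.

examples = [
    ("The food was amazing and the staff were friendly.", "positive"),
    ("Waited an hour and the order was still wrong.", "negative"),
    ("It was fine, nothing special either way.", "neutral"),
]

def few_shot_prompt(pairs, new_input: str) -> str:
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in pairs
    )
    return f"{shots}\n\nReview: {new_input}\nSentiment:"

prompt = few_shot_prompt(examples, "The battery died after two days.")
print(prompt)
```

Ending on the bare `Sentiment:` label is the "cue" from the sandwich idea; the model's most natural continuation is a label in the same format as the shots.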
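And the XML-tag separation can be templated the same way. This sketch adds one detail the post doesn't mention: escaping the data, so a literal `<context>` inside the pasted text can't be mistaken for a delimiter. The helper name is mine:

```python
# Sketch of separating instruction from data with XML tags, as suggested
# above. escape() neutralizes any literal tags inside the data itself.

from xml.sax.saxutils import escape

def tagged_prompt(instruction: str, context: str) -> str:
    return (
        f"<instructions>\n{instruction}\n</instructions>\n"
        f"<context>\n{escape(context)}\n</context>"
    )

print(tagged_prompt("Translate the context to French.",
                    "The meeting is at 3 pm on Tuesday."))
```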

Originally posted by u/madeyoulookbuddy on r/ClaudeCode