Original Reddit post

Until recently I was unaware of the unrest and complaints regarding the performance of Claude Code, and while there are interesting theories as to why this is happening, I believe the much more plausible explanation is that the models are getting better at following instructions.

A model's ability is finite regardless of how well it follows instructions, so it comes down to a simple principle: a model that ignores a user's instructions works well for poor programmers, while a model that follows a user's instructions works well for good programmers. This principle is of course not limited to programming. If you prefer a model you can prompt vaguely, and you are not concerned with its ability to follow instructions precisely, then of course you will be disappointed when the models get better at following instructions and simultaneously require more precise instruction to achieve good results. This shift favors the expert, who can provide explicit instructions and understands the tradeoffs the model will encounter, or, more likely, is willing to address issues as they arise.

In conclusion, my advice to model providers is to abandon the idea of "one model for everyone", and my advice to anyone struggling to get results with Opus 4.6 is this: do better.

Originally posted by u/Immediate-Brush5944 on r/ClaudeCode