Original Reddit post

There's a whole skill to getting better output from AI, and it's not just about what you ask, it's how you treat the answer. Personally, I don't trust AI out of the gate; it has to earn it. Most people ask a question, copy-paste the answer, and they're done. That's how made-up facts end up in your work, your content, or worse, in a client's project. I use a five-prompt process that forces the AI to check its own work before I accept anything it says:

1. The self-check: "What could be wrong with this? What are you least sure about? Where might you be making things up?" AI doesn't fact-check itself unless you push it, and this one prompt flags a lot of the bad stuff.

2. The reasoning test: "Walk me through your thinking step by step and point out every assumption you're making." If it can't explain WHY, that's usually where the made-up bits are hiding.

3. The role switch: "Imagine you're a senior consultant whose job is to find every flaw in this answer. Be harsh." Same AI, same model, completely different quality of answer.

4. The opposition test: "Give me the strongest argument against everything you just told me." If you only ever get one side from AI, you're not thinking, you're just nodding along.

5. The weekly audit: "Here's what I asked you this week. Which of these answers were probably wrong?" AI fails in patterns, so learning YOUR patterns tells you where to double-check.

Once I started using this framework, the quality and reliability of the AI's output improved noticeably.

Originally posted by u/MomentInfinite2940 on r/ArtificialInteligence
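The post describes pasting these prompts by hand. For anyone who wants to script the same workflow, here is a minimal sketch that chains the four per-answer checks and the weekly audit through an OpenAI-style chat client. The client, model name, function names, and exact prompt wording are assumptions for illustration, not part of the original post.

```python
# Minimal sketch: run the five verification prompts against an AI answer.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# in the environment; model and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

VERIFICATION_PROMPTS = [
    # 1. Self-check
    "What could be wrong with this answer? What are you least sure about? "
    "Where might you be making things up?",
    # 2. Reasoning test
    "Walk me through your reasoning step by step and point out every "
    "assumption you are making.",
    # 3. Role switch
    "Act as a senior consultant whose job is to find every flaw in this "
    "answer. Be harsh.",
    # 4. Opposition test
    "Give me the strongest argument against everything you just told me.",
]


def verify_answer(question: str, answer: str, model: str = "gpt-4o-mini") -> list[str]:
    """Run the four per-answer verification prompts and return the critiques."""
    critiques = []
    for prompt in VERIFICATION_PROMPTS:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
                {"role": "user", "content": prompt},
            ],
        )
        critiques.append(response.choices[0].message.content)
    return critiques


def weekly_audit(questions: list[str], model: str = "gpt-4o-mini") -> str:
    """Prompt 5: ask which of the week's answers were probably wrong."""
    listing = "\n".join(f"- {q}" for q in questions)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": "Here is what I asked you this week:\n"
                f"{listing}\n"
                "Which of these answers were probably wrong, and why?",
            }
        ],
    )
    return response.choices[0].message.content
```

As a usage note, `verify_answer(question, answer)` returns one critique per prompt, which you can skim before trusting the original answer; `weekly_audit` takes the week's list of questions and asks the model where it most likely went wrong.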