Original Reddit post

I built an anti-hallucination system called ROTAN (Regulated Output Through Awareness & Neurochemistry) for my emotional AI platform Sentimé. The problem we all know: LLMs make stuff up with full confidence, which is especially dangerous when the AI is supposed to remember things about your life.

My approach has three layers (rough sketches of how this could look in code follow the list):

- **Pre-generation:** Before responding, check: do I actually have grounded knowledge here? If not, ask or say "I don't know."
- **Metacognition:** Track where knowledge comes from: what the user said vs. what was inferred vs. what was generated.
- **Post-generation grounding:** Check claims against actual stored memories and strip anything that can't be verified.

The part I'm most proud of: uncertainty isn't just a number, it's *felt* through simulated neurochemistry. When doubt is high, the AI becomes more cautious and more likely to ask than assert.

Still iterating on it. Would love feedback, or to hear how others are tackling this problem.
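Here is a minimal sketch of what the three layers could look like. All names (`MemoryStore`, `Provenance`, `pre_generation_check`, `post_generation_ground`) are hypothetical illustrations, not ROTAN's actual API, and the exact-string claim matching stands in for whatever retrieval the real system uses:

```python
# Hypothetical sketch of provenance-tagged memory plus pre/post grounding checks.
# Names and structure are illustrative; ROTAN's implementation is not public.
from dataclasses import dataclass
from enum import Enum, auto


class Provenance(Enum):
    """Layer 2 (metacognition): where a piece of knowledge came from."""
    USER_STATED = auto()   # the user said it directly
    INFERRED = auto()      # derived from what the user said
    GENERATED = auto()     # produced by the model, unverified


@dataclass
class Memory:
    claim: str
    provenance: Provenance


class MemoryStore:
    def __init__(self) -> None:
        self._memories: list[Memory] = []

    def add(self, claim: str, provenance: Provenance) -> None:
        self._memories.append(Memory(claim, provenance))

    def supports(self, claim: str) -> bool:
        """Only user-stated or inferred memories count as grounding."""
        return any(
            m.claim == claim and m.provenance is not Provenance.GENERATED
            for m in self._memories
        )


def pre_generation_check(store: MemoryStore, claims: list[str]) -> bool:
    """Layer 1: do we actually have grounded knowledge before answering?"""
    return all(store.supports(c) for c in claims)


def post_generation_ground(store: MemoryStore, claims: list[str]) -> list[str]:
    """Layer 3: keep only claims that check out against stored memories."""
    return [c for c in claims if store.supports(c)]


if __name__ == "__main__":
    store = MemoryStore()
    store.add("user's dog is named Rex", Provenance.USER_STATED)

    claims = ["user's dog is named Rex", "user's dog is a beagle"]
    if not pre_generation_check(store, claims):
        print("I'm not sure about some of that - can you confirm?")
    print("Verified:", post_generation_ground(store, claims))
```

In practice the `supports` check would be semantic (embedding similarity or an entailment model) rather than exact string equality; the exact match just keeps the sketch self-contained.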
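And a hedged sketch of the "felt" uncertainty idea: a single doubt signal, standing in for the simulated neurochemistry, shifts the response policy from asserting to hedging to asking. The thresholds and the three-mode split are my own illustration, not ROTAN's actual values:

```python
# Illustrative only: doubt rises with the fraction of unverified claims,
# and high doubt flips the policy from "assert" toward "ask".
from dataclasses import dataclass


@dataclass
class NeurochemState:
    doubt: float = 0.0  # 0.0 = fully grounded, 1.0 = no grounding at all

    def update(self, grounded: int, total: int) -> None:
        """Raise doubt in proportion to unverified claims."""
        self.doubt = 1.0 - (grounded / total) if total else 1.0


def choose_response_mode(state: NeurochemState) -> str:
    """High doubt -> ask instead of assert; mid doubt -> hedge."""
    if state.doubt > 0.6:
        return "ask"       # "Did I get that right? Can you remind me?"
    if state.doubt > 0.3:
        return "hedge"     # "If I remember correctly, ..."
    return "assert"        # state it plainly


state = NeurochemState()
state.update(grounded=1, total=3)  # only 1 of 3 claims verified
print(round(state.doubt, 2), choose_response_mode(state))  # 0.67 ask
```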

Originally posted by u/Fantastic_Maybe_2880 on r/ArtificialInteligence