Original Reddit post

One thing I keep noticing with AI tools is that even when the answer sounds correct, people still open Google or another AI to verify it anyway, especially for coding, finance, legal, medical, research, or anything high-stakes. A lot of models are good at sounding confident, but they can still:

- hallucinate sources
- misrepresent articles
- leave out nuance
- double down when wrong

So I've been thinking about this idea. What if, while the AI is answering, it could also:

- actively show the exact sources it's using
- open and highlight the relevant quote/section live
- let you inspect the reasoning/evidence without leaving the chat
- maybe even let multiple models challenge each other before a final answer is shown

I'm not asking whether current AI is "good enough." I'm asking specifically about trust. Would something like that actually make you trust AI outputs more, or would you still manually verify anyway?

Originally posted by u/ProfessionalRude3664 on r/ArtificialInteligence