Original Reddit post

I'm in med school and started testing AI tools, mostly because literature review is becoming a full-time job on top of everything else. The annoying part is that most tools look useful at first, but the second you ask for exact citations, guideline-level nuance, or anything remotely clinical, you realize you still have to verify everything yourself.

ChatGPT is great for explanations but sketchy with citations. Perplexity is okay for quick links but often feels shallow. Elicit and Consensus are useful for finding papers, but still limited. SciSpace helps with dense papers. I've also been trying Noah for biomedical questions, and it feels more domain-specific so far, but I'm still testing it.

Honestly, the biggest issue is that everything still needs manual verification. What's your actual AI stack for med school / medical research right now?

Originally posted by u/Savings-Ad342 on r/ArtificialInteligence