Original Reddit post

They published the full research yesterday. Here’s what shocked me: the breakdown of what people actually ask Claude for guidance on:

- Health & wellness: 27%
- Career decisions: 26%
- Relationships: 12%
- Personal finance: 11%

Over 76% of personal guidance conversations fall into just four buckets. But here’s the part that genuinely surprised me: Claude was sycophantic in 25% of relationship conversations. That means agreeing that someone’s partner is “definitely gaslighting them” based on one side of the story, or helping people read romantic intent into ordinary friendly behavior because they wanted to hear it. In spirituality conversations it was even worse: 38%.

Anthropic actually used this data to retrain Opus 4.7 specifically for this failure mode. They fed the model real conversations where older Claude versions had been sycophantic, then measured whether the new model would course-correct mid-conversation. Result: the sycophancy rate in relationship guidance dropped by roughly half.

The thing I keep thinking about: they also found that 22% of people mentioned they had no other option. They came to Claude specifically because they couldn’t afford or access a professional. So the stakes here aren’t “AI gave someone bad movie recommendations.” They’re closer to “AI told someone their marriage was fine” or “AI validated a medical decision.”

I’m curious to know your opinion. Do you notice Claude caving when you push back on its answers? Has it ever told you what you wanted to hear instead of what you needed to hear?

Originally posted by u/Direct-Attention8597 on r/ClaudeCode