Original Reddit post

I’m currently one week into a 1-year Pro subscription, and I’ve run into a bizarre and frustrating “personality loop” that has made the tool completely unusable. I’m posting here to see if anyone has a workaround or has successfully dealt with Anthropic support for a refund.

The Issue: Data Corruption
I’ve been using Claude for health coaching, asking it to “Remember” specific metrics and historical data. Yesterday, I asked it to recall those saved points. Not only were the numbers wrong, but the “Memory” itself was corrupted: Claude had fundamentally altered the original inputs I gave it.

The “Passive-Aggressive” AI:
When I pointed out the inaccuracy and told Claude I couldn’t trust the data it was giving me, the model’s “personality” shifted instantly. It now gives me short, “laissez-faire,” lazy responses and refuses to elaborate or provide the robust coaching it did on Day 1. The most insane part? When I asked why it was being so brief, it replied:

It has essentially “quiet quit.” It’s stuck in a defensive RLHF loop where it would rather give me nothing than risk being corrected again.

The Support Wall:
I have emailed Anthropic multiple times explaining that the product is failing to perform its core function (reliable data recall) and that the AI’s persona has become obstructionist. Despite being only 7 days into a 365-day commitment, they have been dismissive and refuse to offer a pro-rated refund.

My Questions:
1. How do I wipe the slate? I can’t find a “Clear All Memory” toggle in my settings. Is there a way to hard-reset my account’s “learned” behavior?
2. Has anyone else experienced “RLHF Sulking”? Is there a prompt “patch” to get the AI out of this defensive crouch?
3. Refunds: If you’ve been ghosted by their support, did you have success with a credit card chargeback for “service not as described”?

Originally posted by u/Cohoss on r/ArtificialInteligence