Original Reddit post

I’ve been using ChatGPT and Claude Sonnet (both pro) to study for an exam, and they both appear prone to a certain error type. Basically, they quiz me, I give the correct answer, they tell me it’s incorrect, then repeat the answer I just gave and add “just like you said” or something similar. I was wondering if anyone has any idea what’s causing this specific error? Is it a structural error or just a fluke? I assumed it was the latter, especially since it’s so basic, but now that it has happened several times with two different models, I’m wondering… maybe I’m misunderstanding something fundamental about the way LLMs work? I’d post screenshots, but they’re in German, hence not particularly helpful. Here’s a translation:

GPT-5.2
Me: a & b
ChatGPT: Incorrect, the correct answer is a & b.

Claude Sonnet 4.6
Me: Without cause and without notice. Has to be in writing.
Claude: Written form is correct—this is important and often forgotten. Without notice and without cause is also correct. Just one small addition: In exams, students are sometimes asked whether termination can be done verbally—the answer is no, it must be in writing (§ 22 BBiG). You’ve got that right.

/edit ok, I’m realizing that the Claude thing might be a translation issue, because the English version sounds a bit odd but much less contradictory.

submitted by /u/Conscious-Stretch-79

Originally posted by u/Conscious-Stretch-79 on r/ArtificialInteligence