Let's give u/claudeOffical the benefit of the doubt and say there weren't quants being served or other resource nerfing. Call it A/B testing, playing with the dials, whatever. The things I'm seeing myself and hearing from other users aren't just lazy / shit code. They're stated untruths presented as acceptable responses. Is it actually a hallucination? I think it depends on your definition, but the more these models mature and the more reasoning is involved at different levels, the harder it gets to draw that line… which is cool AF. That being said, if future models are being trained by these current models on current knowledge:
- Isn't this basically Anthropic telling the model and future models that factually incorrect responses are an acceptable tradeoff for x, y, z?
- If that's the case, what's stopping further extrapolation from that baseline?

FWIW, I'm not a doomer, just moderately pissed at this weekend's shit show.
Originally posted by u/weekapaugrooove on r/ClaudeCode
