Claude Opus 4.6 was released by Anthropic on February 5, 2026, just a week ago, with claims of a major upgrade: long context windows, improved agentic performance, and benchmark wins over GPT‑5.2 and Gemini 3 Pro. But honestly, after playing with it for a while, I was left underwhelmed.

Yes, the benchmarks headline higher scores and a 1M token context window, but in real use it often feels slower, more verbose, and not much smarter than before, especially for everyday productivity tasks. And with pricing still at $5/1M input + $25/1M output tokens (often with a premium on top for long contexts), it gets expensive fast. It makes me wonder: why should we pay so much more for only marginal practical improvement? Performance gains on paper don't always translate to better productivity in real workflows, and the cost/value just doesn't feel aligned for many use cases.

Maybe I'm just skeptical from having used far cheaper or more efficient alternatives for similar tasks. I could see this being great in some edge scenarios, but as a daily AI assistant for the average developer or knowledge worker, the ROI feels thin.
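For a rough sense of the cost point, here's a back-of-the-envelope sketch using the quoted $5/$25 per 1M token rates. The workload numbers (requests per day, tokens per request) are hypothetical, and this ignores any long-context premium:

```python
# Back-of-the-envelope cost sketch at the quoted rates.
# Workload figures below are hypothetical assumptions, not measurements.

INPUT_PER_M = 5.00    # USD per 1M input tokens (quoted rate)
OUTPUT_PER_M = 25.00  # USD per 1M output tokens (quoted rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, ignoring any long-context premium."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Hypothetical daily-assistant usage: 50 requests, ~8k tokens in / ~1k out each.
daily = 50 * request_cost(8_000, 1_000)
print(f"${daily:.2f}/day, ~${daily * 22:.2f}/month (22 workdays)")
# → $3.25/day, ~$71.50/month (22 workdays)
```

Even at these modest assumed volumes the monthly bill is real money, and heavy agentic use with long contexts scales it up quickly.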
Originally posted by u/RepulsivePurchase257 on r/ArtificialInteligence
