Original Reddit post

Until last week, generating an image inside Claude meant Claude wrote you a prompt. Then you copied it, opened another tab, pasted it into Midjourney or wherever, waited, and came back. Maybe you iterated a few times (probably more). The chat never saw the results, so it kept handing you prompts blind.

Now Claude generates the image itself, thanks to MCP. Inside the same chat. Same conversation. Same context. You ask. It plans. It renders. It hands you the file.

A few smaller MCP connectors have launched this year - Pixa for Kling, Luma, and Hailuo; HeyGen for avatars; Gemini Media for Google's stack. All useful, all single-vendor, two or three models in scope. The connector that landed this week is the first one I've used that runs 30-plus models behind one URL: Sora, Veo, Seedance, Kling, GPT Image 2, Nano Banana, Soul. The agent picks the model - you don't.

I tested it end-to-end on a 6-shot ad mock this week. Claude routed Soul for character continuity, Seedance for the motion-heavy beats, and GPT Image 2 for the product shot. It picked the same models I would have picked manually 5 times out of 6. The whole brief closed in roughly 50 minutes against ~2.5 hours of my old multi-tab process.

That's an agent by the working definition I care about: a system that takes a goal, plans across tools, and produces a finished artifact without me hand-holding each step. The keynotes have been promising this for two years, and most "agent" demos still amount to a chat window calling APIs in the background. (For anyone who wants the mechanical picture, there's a toy sketch of the planner/executor loop at the bottom of the post.)

The second-order effect is what nobody is naming: the barrier between "agent that talks about creative work" and "agent that produces creative work" is gone - or at least we're one step closer to automated systems running complex generations. A year from now I think we'll look at "I'll write the prompt and you paste it into another tool" the way we look at burning a CD to share a playlist - not because CDs were bad, but because the workflow stopped making sense.

Worth flagging the rough edges too:

- Soul drifts from the fourth generation of the same character onward (I had to retrain it mid-session twice).
- Video gen is still 30-90 seconds per shot - no real speed gain over standalone tools.
- Per-generation pricing runs roughly 2-3x what you'd pay going direct to fal or Replicate, so for cost-optimized batch runs this is the wrong tool.

Real tradeoffs.

The same pattern is going to hit code, design, and music. Which domain do you think breaks first - where does the chat-as-planner / execution-as-tool loop close inside one session?
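The toy sketch, as promised. Every name in it (Shot, MODEL_HINTS, plan_shots, render) is my invention for illustration, not the connector's actual API - in the real thing the planning happens in Claude's own reasoning and the execution is an MCP tool call behind one URL. This just shows the shape of the loop:

```python
# Toy sketch of the chat-as-planner / execution-as-tool loop.
# All names are hypothetical stand-ins, not the connector's real API.

from dataclasses import dataclass

@dataclass
class Shot:
    description: str
    needs: str  # e.g. "character_continuity", "motion", "product"

# Hypothetical routing table: which model the planner prefers per requirement.
MODEL_HINTS = {
    "character_continuity": "soul",
    "motion": "seedance",
    "product": "gpt-image-2",
}

def plan_shots(brief: list[Shot]) -> list[tuple[Shot, str]]:
    """Planner step: map each shot in the brief to a model."""
    return [(shot, MODEL_HINTS.get(shot.needs, "veo")) for shot in brief]

def render(shot: Shot, model: str) -> str:
    """Execution step: in the real connector this would be an MCP tool
    call behind one URL; here it's a stub returning a fake file path."""
    return f"/renders/{model}/{shot.needs}.mp4"

if __name__ == "__main__":
    brief = [
        Shot("hero walks into frame", "character_continuity"),
        Shot("whip pan across the street", "motion"),
        Shot("bottle on white sweep", "product"),
    ]
    for shot, model in plan_shots(brief):
        print(model, "->", render(shot, model))
```

The point is the division of labor: the planner decides per shot, the executor just renders, and nothing in between needs a human switching tabs.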

Originally posted by u/tommetzgerz756 on r/ArtificialInteligence