The idea was simple: instead of prompting AI blind, use Blender to control exactly what’s in the scene — object positions, camera angles, motion timing.

Workflow:

1. Built a basic scene in Blender (landscape, car, helicopter, road) — no complex materials, just layout
2. Animated the cameras and objects with keyframes
3. Extracted key frames from the animation
4. Fed those frames into an AI image model to generate photorealistic versions of each shot
5. Gave both the original 3D animation AND the AI images to Seedance 2 (Reference to Video)
6. Seedance reconstructed the sequence with cinematic realism

The Blender file basically acts as a director’s pre-vis — you control the composition, the AI handles the render.
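The keyframe-extraction step (step 3) can be scripted rather than done by hand. Below is a minimal sketch of how that might look using Blender's `bpy` API: it walks the scene's animated objects, collects the frame numbers where keyframes sit, and renders a still at each one. The function names, the `min_gap` deduplication parameter, and the output path are my own assumptions, not from the original post; only `render_keyframes` requires running inside Blender.

```python
def gather_keyframes(fcurve_points, min_gap=1):
    """Return sorted unique frame numbers from (frame, value) keyframe pairs,
    skipping frames closer than min_gap to the previously kept frame.
    (Pure Python; min_gap is a hypothetical dedup knob, not from the post.)"""
    frames = sorted({int(round(f)) for f, _ in fcurve_points})
    kept = []
    for f in frames:
        if not kept or f - kept[-1] >= min_gap:
            kept.append(f)
    return kept


def render_keyframes(output_dir="//keyframes/"):
    """Render a still image at every keyframe in the scene.
    Must run inside Blender's bundled Python (needs the bpy module).
    '//' makes the path relative to the .blend file."""
    import bpy  # only available inside Blender

    scene = bpy.context.scene
    points = []
    for obj in scene.objects:
        anim = obj.animation_data
        if anim and anim.action:
            for fc in anim.action.fcurves:
                points.extend((kp.co.x, kp.co.y) for kp in fc.keyframe_points)

    for frame in gather_keyframes(points):
        scene.frame_set(frame)  # jump the playhead to this keyframe
        scene.render.filepath = f"{output_dir}frame_{frame:04d}.png"
        bpy.ops.render.render(write_still=True)  # write the still to disk
```

The resulting numbered PNGs are what you'd hand to the image model in step 4, and the filenames keep the shots in animation order.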
Originally posted by u/waterarttrkgl on r/ArtificialInteligence
