Original Reddit post

I saw that Dreamina just launched Seedance 2.0, and I spent a few days testing it. Since I handle a lot of video content for my team, I really wanted to see how well it works. In the past, using AI video felt like a lucky draw, but this update feels more like a control panel. I can mix images, videos, and audio together as references. In my experience, Seedance 2.0 is great for fine-grained control and exact camera moves, and it handles complex actions and creative transitions much better than other tools I've tried. For comparison, Sora 2 is very fast for quick drafts and brainstorming, and Kling 2.6 has great physics for natural body movement, but for professional directing, Seedance 2.0 is the most precise.

Here are some details from my tests:

- One-take Tracking. I uploaded five images of different scenes and had the camera follow a runner from the street, up the stairs, and onto a roof without any cuts. This smooth tracking feels much more natural than stitching separate clips together.

- Complex Effects. I uploaded a reference video with a puzzle-breaking effect, and the model recognized the rhythm of the transition perfectly. I replaced the text with my own logo, and it reproduced the breaking-and-rebuilding effect. This saved me a lot of time in post-production.

- Targeted Video Editing. I replaced the main singer in an existing video by uploading a new photo. I could swap the character without changing the camera movement or the actions. Being able to edit without starting over is very helpful for commercial work.

- Story Completion. I uploaded a comic strip as a reference and asked the model to act it out in order. It understood the logic of the frames and even added sound effects. This is perfect for making trailers from static images.

- Mixed Instructions. I used one image for the look and one video for the action at the same time. The model can separate the face from the movement: I put a static character into a high-level martial-arts move, and the transitions looked very natural.

Have you guys made anything cool with it yet? I tried using the editing tool to swap a character, but I noticed the background still shakes a little bit sometimes. Has anyone found a better way to keep the background still?

Originally posted by u/New-Needleworker1755 on r/ArtificialInteligence