Hi guys, the FastVideo team here. Following up on our faster-than-realtime 5s video post, a lot of you pointed out that if you can generate faster than you can watch, you could theoretically have zero-latency streaming. We thought about that too. So, building on that backbone, we chained those 5s clips into a 30s scene and made it so you can live-edit whatever is in the video just by prompting (see the toy sketch at the end of this post for the gist of the loop).

The base model we're working with (ltx-2) is tricky to prompt, though, so some parts of the video will be kind of janky. This is really just a prototype/PoC of what interactivity would feel like at faster-than-realtime generation speeds. With stronger OSS models on the way, quality should only get better from here.

Anyways, check out the demo here to feel the speed for yourself, and for more details, read our blog: https://haoailab.com/blogs/dreamverse/

And yes, like in our 5s demo, this is running on a single B200 right now. We're still working hard on 5090 support, which will be open-sourced :)
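For anyone curious about the core idea, here's a toy sketch of why faster-than-realtime generation gives you seamless chaining plus live edits. To be clear: this is NOT our actual code or the LTX-2 API; all names, timings, and the threading setup are made up purely to show the shape of the producer/consumer loop. As long as each chunk generates faster than it plays, the next chunk is always ready at the boundary, and a prompt edit just takes effect at the next chunk instead of restarting the scene.

```python
import threading
import queue
import time

# Hypothetical timings: a 5s chunk that generates in 3s (faster than realtime).
CLIP_SECONDS = 5.0
GEN_SECONDS = 3.0

clip_queue = queue.Queue(maxsize=2)   # small buffer between generator and player
current_prompt = "a calm forest scene"
prompt_lock = threading.Lock()

def generate_clip(prompt: str) -> str:
    """Stand-in for the real model call; just sleeps to mimic generation cost."""
    time.sleep(GEN_SECONDS)
    return f"<{CLIP_SECONDS:.0f}s clip: '{prompt}'>"

def generator(n_clips: int) -> None:
    # Each chunk picks up whatever prompt the user last set, so a live edit
    # lands at the next chunk boundary rather than interrupting playback.
    for _ in range(n_clips):
        with prompt_lock:
            prompt = current_prompt
        clip_queue.put(generate_clip(prompt))

def player(n_clips: int) -> None:
    # Because GEN_SECONDS < CLIP_SECONDS, after the first chunk the next one
    # is always waiting in the queue by the time playback needs it.
    for _ in range(n_clips):
        clip = clip_queue.get()
        print(f"now playing {clip}")
        time.sleep(CLIP_SECONDS)  # stand-in for actual playback

n = 6  # 6 chunks of 5s = a 30s scene
threads = [threading.Thread(target=generator, args=(n,)),
           threading.Thread(target=player, args=(n,))]
for t in threads:
    t.start()

time.sleep(12)  # mid-playback, "live-edit" the scene by swapping the prompt
with prompt_lock:
    current_prompt = "the forest at night, lanterns glowing"

for t in threads:
    t.join()
```

If you run it, you'll see the edited prompt show up a couple of chunks later, which is roughly how the live-editing in the demo feels: you type, and the change rolls in at the next segment.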
Originally posted by u/techstacknerd on r/ArtificialInteligence
