Hey everyone, I’ve been experimenting with combining Nano Banana-style image reimagining with Gaussian splat tours. Gaussian splats are amazing for experiencing a real space as it exists; Nano Banana is great at reimagining how an image could look. So the question was: why keep them apart?

In Spatial Studio (by realhorizons), we built a feature called Reframe AI that lets users move through a 3D splat tour, pick a camera view, and reimagine that exact perspective using AI.

Some examples:

- An empty room can be furnished.
- A cafe can be changed into a seasonal theme.
- An under-construction property can be visualized closer to completion.
- A venue can preview different event setups.
- A real estate team can create multiple marketing concepts from one capture.

We’re also experimenting with Spatial Props, where selected AI-generated objects or design elements can be brought back into the 3D experience instead of staying as a flat edited image.

The flow is: capture space → generate splat tour → pick a view → reimagine with AI → publish/share

The original captured splat stays intact, and AI-generated possibilities are layered on top of it inside the spatial tour.

Still early, but I think this combo is interesting: real-world 3D capture + Gaussian splats + Nano Banana-style image generation = reimagined spatial experiences.
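To make the "original splat stays intact, AI results get layered on" idea concrete, here is a minimal Python sketch of that flow. All of the names here (`CameraView`, `SplatTour`, `reimagine`) are hypothetical illustrations, not the actual Spatial Studio API, and the image-model call is stubbed out:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CameraView:
    """A camera pose picked inside the splat tour (hypothetical representation)."""
    position: tuple  # (x, y, z)
    rotation: tuple  # (pitch, yaw, roll) in degrees

@dataclass
class SplatTour:
    """The captured tour. The source capture is never mutated;
    AI reimaginings accumulate as additive layers."""
    name: str
    layers: list = field(default_factory=list)

    def reimagine(self, view: CameraView, prompt: str) -> dict:
        # Stub for the image-model call (a Nano Banana-style API would go here).
        generated = {"view": view, "prompt": prompt, "image": f"render:{prompt}"}
        # Layer the result on top; the original capture stays untouched.
        self.layers.append(generated)
        return generated

tour = SplatTour(name="empty-room")
result = tour.reimagine(
    CameraView(position=(0, 1.6, 2), rotation=(0, 180, 0)),
    "furnished living room",
)
print(len(tour.layers))  # 1
```

The design point is that each reimagined view is additive: publishing or sharing can include any subset of layers while the underlying capture remains the ground truth.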
Originally posted by u/Wrong-Yak-3931 on r/ArtificialInteligence
