Original Reddit post

Keeping identity about 90% consistent across different poses has been my main focus for the past few weeks, and it's pretty obvious that simple prompting isn't enough anymore. I've been testing how different models handle identity embeddings, and reference-based generation now feels solid enough for quick prototyping.

Most of my tests have been with SD, but I've also been running Flux and Seedream through separate setups like Comfy, as well as all-in-one tools like writingmate. All of those options make it much easier to cycle through dozens of AI models and see which ones actually hold facial structure when switching styles, and the all-in-one tools also help with drafting prompts for AI influencers.

Training a custom LoRA takes me around 25 minutes with about 15 reference images, which is a big improvement over last year. That said, with something like Nano Banana Pro I don't really need a LoRA and can lean on more detailed prompting instead, and, oddly enough, it even feels more stable.

Video is a different problem. Testing a consistent character generator with temporal coherence is a whole other level, and most people still seem to anchor identity with static keyframes before animating. So far I'm getting around 70% identity consistency in more complex, multi-character scenes, and I can more or less replicate that across most of the tools I've tried.
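For context on how a figure like "70% identity consistency" can be put on a number at all: a common approach is to compare face embeddings from a reference image against embeddings from each generated frame using cosine similarity. Below is a minimal sketch of that scoring step; the `identity_consistency` helper is my own illustration, and the random vectors stand in for real embeddings you would normally get from a face-recognition model such as ArcFace (the post doesn't say which measurement method was used).

```python
# Sketch: score identity consistency as mean cosine similarity between a
# reference face embedding and embeddings from generated frames.
# The embeddings here are synthetic stand-ins, not real model outputs.
import numpy as np


def identity_consistency(ref_embedding, frame_embeddings):
    """Mean cosine similarity between one reference embedding and each
    frame embedding. Returns a value in [-1, 1]; closer to 1 means the
    generated frames kept the reference identity."""
    ref = ref_embedding / np.linalg.norm(ref_embedding)
    sims = []
    for emb in frame_embeddings:
        emb = emb / np.linalg.norm(emb)
        sims.append(float(ref @ emb))
    return float(np.mean(sims))


rng = np.random.default_rng(0)
ref = rng.normal(size=512)  # stand-in for a 512-d face embedding
# Simulated "generated frames": the reference identity plus small drift.
frames = [ref + rng.normal(scale=0.3, size=512) for _ in range(8)]
score = identity_consistency(ref, frames)
print(f"identity consistency: {score:.2f}")
```

In practice you would extract the embeddings with an actual face-recognition pipeline and then decide on a similarity threshold that counts a frame as "on identity"; the averaging step itself stays the same.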

Originally posted by u/Working-Chemical-337 on r/ArtificialInteligence