Original Reddit post

Hi everyone! I’m a solo developer, and I’ve spent the last few months building SnapShade. The main challenge I wanted to tackle was the “uncanny valley” effect in hair filters—specifically, maintaining fine strand details, transparency, and realistic occlusion against the face and background. I’ve moved away from generic image-to-image models to a more specialized pipeline that respects hair physics and lighting conditions.

I’m looking for feedback from fellow AI enthusiasts on a few points:

- How do you find the temporal consistency and texture blending in these results?
- Any suggestions for improving “root-to-tip” color gradient accuracy in latent space?

It’s live on the App Store if you want to see the full implementation: https://apps.apple.com/us/app/snapshade-ai-hair-try-on/id6758586608

Would love to discuss the tech and the pipeline behind it!
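The post doesn’t describe SnapShade’s actual pipeline, but the strand-detail and transparency problem it mentions is commonly handled by compositing the recolored hair with a *soft* (continuous-valued) segmentation matte rather than a hard binary mask, so partial coverage at wispy strand edges is preserved. A minimal sketch of that idea, with all function and variable names purely illustrative:

```python
import numpy as np

def composite_hair(image: np.ndarray, recolored: np.ndarray,
                   matte: np.ndarray) -> np.ndarray:
    """Alpha-composite a recolored hair layer over the original frame.

    `matte` holds per-pixel hair coverage in [0, 1]. Fractional values
    at strand boundaries blend the two layers smoothly, which is what
    keeps fine strands and transparency from looking "cut out".
    (Hypothetical helper, not SnapShade's real API.)
    """
    alpha = matte[..., None]          # (H, W) -> (H, W, 1) for RGB broadcast
    return alpha * recolored + (1.0 - alpha) * image

# Tiny demo on a 2x2 RGB "frame": original is black, recolored is white.
image = np.zeros((2, 2, 3))
recolored = np.ones((2, 2, 3))
matte = np.array([[1.0, 0.5],          # full hair, half-covered strand edge
                  [0.0, 0.0]])         # background
out = composite_hair(image, recolored, matte)
```

The half-covered pixel comes out mid-gray, i.e. an even mix of the two layers, which a binary mask could not produce.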

Originally posted by u/crocodilebeets on r/ArtificialInteligence