Original Reddit post

I’m an Italian indie Android developer and I’ve been working on an on-device AI detection app. I wanted to share a real-world test on one of the most viral AI-generated clips right now: the Brad Pitt vs Tom Cruise fight made with Seedance 2.0. The app flagged it as 89% AI-generated in under a second, all running locally on the phone.

How it works: it uses an optimized Vision Transformer (ViT) model converted to ONNX format for efficient on-device inference. No internet connection, no cloud processing, no data ever leaves the phone.

The main feature is the Quick Tile, a button in Android’s notification shade that captures and analyzes whatever is on screen without leaving the app you’re in. So you can be scrolling Instagram Reels, tap the tile, and get a verdict in under a second. It’s the first app to implement AI detection through Android’s Quick Tile system.

Other features include batch analysis of up to 50 images from the gallery and single-image analysis. No account, no email, no subscription.

To be transparent: these detection systems are not perfect. AI detection is an ongoing arms race with generative models, and the app can sometimes get it wrong. It should be used as a helpful indicator, not as absolute proof.

Free on the Play Store: https://play.google.com/store/apps/details?id=com.aidetector.app

I’d love to hear your feedback, especially from anyone working in AI detection or computer vision. What approaches do you think will hold up as generative models keep improving?
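For context on the on-device path: a ViT exported to ONNX typically takes a fixed-size RGB tensor (commonly 224x224) normalized with ImageNet statistics, which on Android would be fed to an ONNX Runtime session. The app's actual preprocessing is not public, so the recipe below is an assumption based on the standard ViT input convention; a pure-Python sketch of that tensor layout:

```python
# Standard ImageNet normalization statistics used by most ViT checkpoints.
# Assumption: the app follows this common recipe; its real pipeline is unknown.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def to_vit_input(pixels, size=224):
    """Convert a size*size list of 8-bit RGB tuples into a flat CHW float list,
    normalized per channel, ready to reshape into a (1, 3, size, size) tensor."""
    assert len(pixels) == size * size, "expected a size*size pixel list"
    planes = ([], [], [])  # one plane per channel: R, G, B
    for rgb in pixels:
        for plane, value, mean, std in zip(planes, rgb, IMAGENET_MEAN, IMAGENET_STD):
            plane.append((value / 255.0 - mean) / std)
    return planes[0] + planes[1] + planes[2]
```

On device, the resulting float array would be wrapped in an ONNX Runtime input tensor and run through the exported ViT in a single session call, which is what makes the sub-second, fully local verdict plausible.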
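The post doesn't say how the 89% figure is produced, but for a binary classifier it is most likely a softmax confidence over the model's output logits. A minimal sketch of that final step, assuming a two-class {real, AI-generated} head (the logit values below are made up for illustration, not taken from the app):

```python
import math

def softmax(logits):
    """Numerically stable softmax over raw classifier logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a binary head: [real, ai_generated].
logits = [0.8, 2.9]
probs = softmax(logits)
print(f"AI-generated: {probs[1]:.0%}")  # prints "AI-generated: 89%"
```

A score like this is a calibrated-looking probability only to the extent the model itself is calibrated, which is consistent with the author's caveat that the verdict is an indicator, not proof.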

Originally posted by u/No-Signal5542 on r/ArtificialInteligence