Hi there, I've built an auto-labeling tool and I'm looking for feedback from the community. I've been testing it on various datasets to see how it handles different edge cases without manual intervention. You can see some of the examples I've run so far here: https://drive.google.com/drive/folders/1YN7uT_NkBj_d8aHR4hKD-k5edYj765pm?usp=sharing

The goal is to eliminate the manual annotation bottleneck entirely while maintaining high accuracy. If you want to see how the engine handles labels or specialized formats like YOLO, the live demo is here: https://demolabelling-production.up.railway.app/

I'm curious to know: what are the biggest pain points you still face with automated annotation? Does this output look like it would actually save you time in your current pipeline?
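For anyone not familiar with the YOLO label format mentioned above, here's a minimal sketch of what the tool's output would need to look like. This is purely illustrative (the function name and box convention are mine, not the tool's API): YOLO stores one line per object with the class index followed by the box center and size, all normalized to [0, 1] by the image dimensions.

```python
# Hypothetical helper, not part of the tool: converts a pixel-space
# bounding box (x_min, y_min, x_max, y_max) into a YOLO annotation line.
def to_yolo_line(class_id, box, img_w, img_h):
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2 / img_w   # normalized box center x
    y_center = (y_min + y_max) / 2 / img_h   # normalized box center y
    width = (x_max - x_min) / img_w          # normalized box width
    height = (y_max - y_min) / img_h         # normalized box height
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x50 box centered at (320, 240) in a 640x480 image, class 0:
print(to_yolo_line(0, (270, 215, 370, 265), 640, 480))
# → 0 0.500000 0.500000 0.156250 0.104167
```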
Originally posted by u/Able_Message5493 on r/ArtificialInteligence
