Hi all, I’ve been thinking about a recurring challenge in AI: ensuring that the data AI systems learn from actually comes from real humans. Bots, fake accounts, and deepfakes can distort datasets and degrade model quality. Some approaches confirm human presence without exposing any private information (a toy sketch of that idea follows below). This makes me wonder:

• Could verifying human presence in datasets make AI models less biased or more robust?
• How might AI-driven platforms or communities evolve if most accounts were confirmed as real humans?
• Can you imagine AI models trained only on verified human input? How might that change system behavior?
• Are there ethical or privacy concerns we should consider when verifying users at scale?

I’d love to hear perspectives from anyone working in AI research or development. Are there alternative approaches that could solve these challenges more effectively?
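To make the "confirm a human was present without exposing identity" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `attest`, `is_verified`, and the HMAC-based token are stand-ins for what a real attestation service might issue (production schemes would use something like blind signatures or zero-knowledge proofs so the issuer can't link tokens to people). Take it as an illustration of the dataset-filtering step, not a real protocol.

```python
# Toy sketch: keep only training records that carry a valid
# "verified human" attestation token. The HMAC scheme below is a
# placeholder for a privacy-preserving attestation -- note the token
# says nothing about WHO the human is, only that the (hypothetical)
# service vouched a human produced the record.
import hmac
import hashlib

# Hypothetical key held by the attestation service. In this toy
# version the verifier shares it, so anyone with the key could forge
# tokens; a real design would avoid that with asymmetric crypto.
SERVICE_KEY = b"attestation-service-key"

def attest(record_text: str) -> str:
    """What the service would issue after confirming a human author."""
    return hmac.new(SERVICE_KEY, record_text.encode(), hashlib.sha256).hexdigest()

def is_verified(record: dict) -> bool:
    """Check the attestation without learning anything about the author."""
    expected = attest(record["text"])
    return hmac.compare_digest(record.get("token", ""), expected)

dataset = [
    {"text": "a human-written sample", "token": attest("a human-written sample")},
    {"text": "a bot-generated sample", "token": "forged"},
]

# Training would only ever see the records that pass verification.
verified_only = [r for r in dataset if is_verified(r)]
print(len(verified_only))  # -> 1
```

The design point is that the token proves "a verified human produced this record" while carrying no identity at all; the ethics question in the last bullet is really about whether the issuing service itself can be prevented from linking tokens back to individuals.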
Originally posted by u/vinewb on r/ArtificialInteligence
