Original Reddit post

Every few months there’s a new benchmark AI crushes. Bar exam. Coding interviews. Medical licensing. The list keeps shrinking. But there’s a specific cluster of abilities where humans still quietly dominate, and I don’t think they get enough attention:

∙ Detecting sarcasm from text alone, without tone of voice or facial cues
∙ Tracking multiple moving objects simultaneously in your peripheral vision
∙ Spotting a single broken cell in a mirrored grid pattern
∙ Reading depth and spatial relationships from biological visual cues
∙ Distinguishing impossible 3D objects by feel, not by algebraic verification

These aren’t “humans are smarter” claims. AI is better than us at most things now. But in these specific domains, the architecture just doesn’t work the same way. A human child spots a geometrically impossible figure in under 2 seconds; algorithmic detection on novel shapes takes 10-50x longer.

So I started building games around exactly these abilities, targeting domains where the human advantage is documented and specific. Not to make people feel good about themselves, but to actually measure and train these abilities and watch them improve over time.

Genuinely curious: are there other cognitive domains you’d add to this list? Things where you’ve noticed AI consistently fall apart in ways that feel fundamental rather than just “not enough training data”?

Originally posted by u/Successful_Baker6666 on r/ArtificialInteligence