Original Reddit post

Hey, I’ve been working on a problem in AI epistemic uncertainty and wanted to share the result in case it’s useful to anyone here.

Problem: Neural networks confidently classify EVERYTHING… even data they’ve never seen. Feed them noise? “Cat, 92%.” Corrupted image? “Dog, 87%.”

Solution: STLE (Set Theoretic Learning Environment) addresses this with complementary fuzzy sets: μ_x (accessible) + μ_y (inaccessible) = 1.

The Approach:

  • μ_x: “How accessible is this data to my knowledge?”
  • μ_y: “How inaccessible is it?”
  • Constraint: μ_x + μ_y = 1
  • When the model sees training data → μ_x ≈ 0.9
  • When it sees unfamiliar data → μ_x ≈ 0.3
  • When it’s at the “learning frontier” → μ_x ≈ 0.5

Results:

  • OOD Detection: AUROC 0.668 without any OOD training data
  • Complementarity: exact (0.0 error), mathematically guaranteed by construction
  • Test Accuracy: 81.5% on the Two Moons dataset
  • Active Learning: identifies the learning frontier (14.5% of the test set)

(A minimal sketch of the complementarity idea follows the links below.)

Visit the GitHub repo for:

  • Minimal version: Pure NumPy (17KB, zero dependencies)
  • Full version: PyTorch implementation (18KB)
  • 5 validation experiments (all reproducible)
  • Visualization scripts
  • Complete documentation

Visit Substack to support the research: https://strangehospital.substack.com/
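For a feel of the mechanics before opening the repo, here is a minimal sketch of the complementarity idea. It assumes accessibility is scored by RBF similarity to the nearest training point; the `membership` function, the kernel choice, and the `length_scale` parameter are illustrative assumptions, not the repo's actual STLE implementation.

```python
import numpy as np

def membership(x_query, x_train, length_scale=1.0):
    """Toy accessible/inaccessible memberships (illustrative, not the repo's code).

    mu_x ("how accessible is this point to what I've seen?") is an RBF
    similarity to the nearest training point; mu_y is defined as its
    complement, so mu_x + mu_y = 1 holds exactly by construction.
    """
    # Squared distance from every query point to every training point.
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
    # Accessibility: similarity to the closest training point, in (0, 1].
    mu_x = np.exp(-d2.min(axis=1) / (2.0 * length_scale ** 2))
    mu_y = 1.0 - mu_x  # complementarity is exact by definition
    return mu_x, mu_y

rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 2))                  # "seen" data
x_query = np.vstack([
    x_train[:5] + 0.01,                              # near the training set -> mu_x high
    rng.normal(loc=5.0, scale=1.0, size=(5, 2)),     # far from it           -> mu_x low
])

mu_x, mu_y = membership(x_query, x_train)
frontier = np.abs(mu_x - 0.5) < 0.1                  # candidate "learning frontier" points
print(np.round(mu_x, 2), int(frontier.sum()))
```

In this toy setup, μ_x itself would serve as the OOD score (no OOD training data needed), and thresholding μ_x near 0.5 is one way the "learning frontier" idea could translate into an active-learning query rule: points the model can neither confidently place inside nor outside its knowledge are flagged for labeling.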

Originally posted by u/Strange_Hospital7878 on r/ArtificialInteligence