Curious what the community thinks of this approach to AGI. RAVANA v2 uses "pressure-shaped developmental learning": the AI doesn't just optimize, it learns through regulated dissonance, with identity coherence maintained by constitutional bounds.

Key components:

- Governor: central regulation with 5 modes
- Identity: momentum-based growth, clamped by the constitution
- Resolution: conflict resolution with partial credit
- Adaptation: learning from clamp events (Phase B)

The insight: most AI safety amounts to "here are rules, follow them." RAVANA v2 is "here's how you learn to regulate yourself."

Results after 100K episodes: dissonance dropped 0.8 → 0.3, and identity coherence rose 0.3 → 0.85.

Paper: https://zenodo.org/records/18309746

What developmental approaches to AGI do you find most compelling?
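For readers wondering what "momentum-based growth, clamped by constitution" might look like mechanically, here is a minimal sketch. All names, update rules, and constants are my own assumptions for illustration, not the paper's actual implementation; the only grounded idea is the pattern the post describes: a momentum-smoothed identity update, hard-clamped to constitutional bounds, with clamp events recorded for a later adaptation phase.

```python
def update_identity(identity, signal, velocity, bounds, momentum=0.9, lr=0.05):
    """One hypothetical identity step: momentum update, then constitutional clamp.

    Returns (new_identity, new_velocity, clamp_events), where clamp_events
    lists the dimensions that hit a bound (input for an "Adaptation" phase).
    """
    clamp_events = []
    new_identity, new_velocity = [], []
    for i, (x, s, v, (lo, hi)) in enumerate(zip(identity, signal, velocity, bounds)):
        v = momentum * v + lr * (s - x)   # momentum-smoothed pressure toward the signal
        x = x + v
        if x < lo or x > hi:              # constitutional clamp on this dimension
            clamp_events.append(i)
            x = min(max(x, lo), hi)
            v = 0.0                       # kill momentum at the bound
        new_identity.append(x)
        new_velocity.append(v)
    return new_identity, new_velocity, clamp_events
```

Under this reading, the Governor would choose the signal, and the Adaptation phase would consume the clamp-event log, e.g. by shrinking the learning rate on dimensions that repeatedly hit their bounds.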
Originally posted by u/ItxLikhith on r/ArtificialInteligence
