Been heads-down building graphify since April 5th. The idea: instead of stuffing your entire codebase into context every session, you build a persistent knowledge graph once and query it with 71x fewer tokens. Works with Claude, Codex, Cursor, Gemini CLI, Aider, and most other assistants. 450k+ PyPI downloads and ~40k GitHub stars in 26 days. Supports code (25+ languages via tree-sitter), SQL schemas, PDFs, docs, images, and video/audio (Whisper transcription), plus R scripts and shell scripts. Curious what this sub is doing for persistent codebase context with GPT-4o / o3 – is context-window stuffing still the main approach, or has something better emerged?
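For anyone wondering what "query instead of stuff" looks like in practice, here's a minimal sketch of the idea (not graphify's actual API). It assumes Python's stdlib `ast` for parsing (graphify itself uses tree-sitter) and `networkx` for the graph; `index_repo` and `query` are hypothetical names:

```python
# Sketch only: index a codebase into a graph once, then answer queries by
# returning just the relevant snippets instead of the whole repo. All names
# here are illustrative, not graphify's real API.
import ast
from pathlib import Path

import networkx as nx


def index_repo(root: str) -> nx.DiGraph:
    """One-time pass: one node per function/class, with its source attached."""
    graph = nx.DiGraph()
    for path in Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8")
        try:
            tree = ast.parse(source)
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
                graph.add_node(
                    f"{path}:{node.name}",
                    kind=type(node).__name__,
                    snippet=ast.get_source_segment(source, node),
                )
                # One edge per call site (bare names only, a simplification),
                # so "what does X depend on?" becomes a one-hop graph walk.
                for child in ast.walk(node):
                    if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                        graph.add_edge(f"{path}:{node.name}", child.func.id)
    return graph


def query(graph: nx.DiGraph, symbol: str) -> list[str]:
    """Return only the snippets whose qualified name mentions the symbol."""
    return [
        data["snippet"]
        for name, data in graph.nodes(data=True)
        if symbol in name and data.get("snippet")
    ]


if __name__ == "__main__":
    g = index_repo(".")            # indexing happens once, not per session
    for snippet in query(g, "index_repo"):
        print(snippet)             # feed just these snippets to the model
```

The point is that the indexing pass runs once; each session then only pays tokens for the handful of snippets a query returns rather than the full tree, which is where a saving like the claimed 71x would come from.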
Originally posted by u/captainkink07 on r/ArtificialInteligence
