Original Reddit post

I’ve been experimenting with different ways to handle context in LLM apps, and I’ve realized that using RAG for everything is not always the best approach. RAG is great when you need document retrieval, repo search, or knowledge-base-style systems, but it starts to feel heavy when you’re building agent workflows, long sessions, or multi-step tools. Here are three repos worth checking out if you’re working in this space.

memvid

An interesting project that acts like a memory layer for AI systems. Instead of always relying on embeddings + a vector DB, it stores memory entries and retrieves context more like agent state. Feels more natural for:

  • agents
  • long conversations
  • multi-step workflows
  • tool usage history
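To make the "memory as agent state" idea concrete, here's a minimal sketch of what that style of memory layer looks like — this is my own toy illustration, not memvid's actual API, and the names (`MemoryStore`, `recall`) are made up:

```python
import time

# Hypothetical sketch of a memory layer in the spirit of the post, NOT
# memvid's real API: entries live as plain agent state and are recalled
# by tag and recency instead of by embedding similarity.
class MemoryStore:
    def __init__(self):
        self.entries = []  # each entry: {"text", "tags", "ts"}

    def add(self, text, tags=()):
        self.entries.append({"text": text, "tags": set(tags), "ts": time.time()})

    def recall(self, tag=None, last_n=3):
        # Filter by tag if one is given, then return the most recent
        # entries — handy for long conversations and tool-usage history.
        hits = [e for e in self.entries if tag is None or tag in e["tags"]]
        return [e["text"] for e in hits[-last_n:]]

mem = MemoryStore()
mem.add("User prefers TypeScript examples", tags=["preference"])
mem.add("Called search_tool('vector db')", tags=["tool"])
mem.add("Called read_file('main.py')", tags=["tool"])
print(mem.recall(tag="tool"))  # just the tool-usage entries, oldest first
```

No embeddings anywhere — recall is driven by what the agent was doing, which is why this shape feels lighter for multi-step workflows than a full RAG stack.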

llama_index

Probably the easiest way to build RAG pipelines right now. Good for:

  • chat with docs
  • repo search
  • knowledge base
  • indexing files

Most RAG projects I see use this.
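llama_index wires the whole pipeline (loading, chunking, embedding, querying) for you. As a rough illustration of the retrieval step it automates, here's a toy version that scores chunks by word overlap as a stand-in for real embedding similarity — the function is mine, not llama_index's API:

```python
# Toy stand-in for RAG retrieval: score each document chunk by word
# overlap with the query (instead of embedding similarity) and return
# the top matches to stuff into the prompt.
def retrieve(chunks, query, top_k=1):
    q = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "llama_index builds indexes over local files",
    "vector databases store embeddings for similarity search",
    "agents track tool usage as state",
]
print(retrieve(docs, "search over local files with an index"))
```

A real pipeline swaps the overlap score for cosine similarity over embeddings and adds chunking, but the shape — index documents, score against the query, take the top k — is the same.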

continue

Open-source coding assistant similar to Cursor / Copilot. Interesting to see how they combine:

  • search
  • indexing
  • context selection
  • memory

Shows that modern tools don’t use pure RAG, but a mix of indexing + retrieval + state.

My takeaway so far:

  • RAG → great for knowledge
  • Memory → better for agents
  • Hybrid → what most real tools use

Curious what others are using for agent memory these days.
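A hedged sketch of that hybrid shape: pull top-k chunks from a static knowledge index (the RAG side) and append the last few entries of mutable session state (the memory side) when building context. All names here are my own illustration, not any of these repos' APIs:

```python
# Hybrid context building: static knowledge retrieval + session memory.
# Knowledge is scored by word overlap with the query (a stand-in for
# embedding similarity); memory is just the most recent session state.
def build_context(knowledge, memory, query, k=1, m=2):
    q = set(query.lower().split())
    retrieved = sorted(
        knowledge,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )[:k]
    return retrieved + memory[-m:]  # top-k knowledge, then last-m memory

knowledge = [
    "The deploy script lives in scripts/deploy.sh",
    "Unit tests run with pytest",
]
memory = [
    "User is debugging a failed deploy",
    "Last tool call: read_logs()",
]
print(build_context(knowledge, memory, "how do I run the deploy script"))
```

The point is that neither half is enough on its own: retrieval alone misses what the agent was just doing, and memory alone misses facts it never saw.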

Originally posted by u/Mysterious-Form-3681 on r/ArtificialInteligence