Original Reddit post

Hey, about a year and a half ago we started working on a proactive AI assistant. Not just a chatbot, but something that could actually act on your behalf: reply to emails in your tone, book or move calendar events, tag and organize your inbox the way you would, even keep you updated on things happening in the world based on what you actually care about. The goal was simple: build something that feels like an extension of how you think.

To make that work, we ran into a pretty fundamental problem: you can't fake understanding. If the system doesn't actually connect things over time, if it doesn't build some kind of internal structure, everything starts to feel shallow very quickly.

So we built what we started calling a "brain": something that could take messy data, extract meaning from it, connect concepts together, and keep that structure consistent over time. At first it was just there to support the assistant. But it kept getting deeper, and honestly more interesting than the assistant itself. About 7 months ago we made a call: we stopped building the assistant and went all-in on that layer.

Then came the part that really confirmed it. When we showed the system to people, they didn't really talk about the automation. They kept pointing at the same thing: "this actually understands what I mean" ... "it really understands me before I would think of something." They were all reacting to the brain. So we leaned into it. That became BrainAPI.

The idea behind it is simple in spirit. Instead of treating data as chunks and retrieving similar text, we process it more like a person would: we extract concepts, connect ideas, and build a structured graph of knowledge. So when you query it, you're not just getting text back, you're navigating something that has actual structure behind it.

What surprised us is how many different things this kind of layer can power.
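To make the "structured graph instead of text chunks" idea concrete, here is a toy sketch of what querying a concept graph by traversal (rather than by text similarity) can look like. Everything here is illustrative: the class name, relations, and example concepts are invented for this sketch and are not BrainAPI's actual schema or API.

```python
from collections import defaultdict

class ConceptGraph:
    """Toy concept graph: nodes are concepts, edges are labeled relations."""

    def __init__(self):
        # concept -> list of (relation, other_concept) pairs
        self.edges = defaultdict(list)

    def connect(self, a, relation, b):
        # Store the relation in both directions so traversal works either way.
        self.edges[a].append((relation, b))
        self.edges[b].append((relation, a))

    def neighbors(self, concept, depth=1):
        """Walk outward from a concept, collecting everything reachable
        within `depth` hops. This is structural navigation, not text match."""
        seen, frontier = {concept}, [concept]
        for _ in range(depth):
            nxt = []
            for node in frontier:
                for _, other in self.edges[node]:
                    if other not in seen:
                        seen.add(other)
                        nxt.append(other)
            frontier = nxt
        seen.discard(concept)
        return seen

g = ConceptGraph()
g.connect("inbox", "contains", "newsletter")
g.connect("newsletter", "about", "ai research")
g.connect("calendar", "mentions", "ai research")

# A two-hop query from "inbox" surfaces "ai research" even though the
# two never co-occur in any single piece of text.
print(sorted(g.neighbors("inbox", depth=2)))
```

The point of the sketch is the query model: relevance comes from how concepts are connected, so related ideas surface even when no raw text mentions both together.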
Once you have a structured understanding of data, you can use it to drive recommendation systems (e-commerce, social), build search engines that don't just keyword-match, add real memory to chatbots, or make RAG setups a lot more reliable.

We've also been experimenting with something we call "polarities": instead of returning a single answer, you can explore a space of possible solutions around a problem, based on how things relate inside the graph.

We've been using this quietly for months on a few B2B use cases, without really putting it out there. Now we're opening it up. We put together a short video to explain it, and open-sourced the core. You can run everything locally (we've tested it with Ollama and offline setups), or deploy "brains" on a managed cloud. It's also extensible: there's a plugin system so you can shape it around your own use case.

The bigger reason we're focusing on this is tied to what we're trying to do at Lumen Labs (our startup). A lot of AI today is powerful, but it's also kind of fragile. It retrieves, it generates, but it doesn't really ground knowledge in a reliable way. And that's where a lot of issues start, especially when accuracy actually matters. We're trying to move toward something more structured, where systems have a kind of memory that's closer to how humans organize knowledge. Not just to make things more useful, but also to reduce how easily things drift into incorrect or misleading outputs.

Anyway, this is not really a launch post. More like sharing what the last year and a half turned into. Curious what people think.

Links:
repo: https://github.com/Lumen-Labs/brainapi2
site + video: https://brain-api.dev/
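The post doesn't spell out how "polarities" works internally, but one way to read the idea ("explore a space of possible solutions based on how things relate inside the graph") is to group a problem's graph neighbors by the relation that links them, so each relation type becomes one direction in the solution space. The function name, relations, and data below are all hypothetical, purely a sketch of that reading:

```python
from collections import defaultdict

def polarities(edges, concept):
    """Given an adjacency map (concept -> list of (relation, concept) pairs),
    return the neighborhood of `concept` grouped by relation type.
    Each relation becomes one 'direction' to explore, instead of a
    single ranked answer."""
    space = defaultdict(set)
    for relation, other in edges.get(concept, []):
        space[relation].add(other)
    return dict(space)

# Toy problem graph: two causes and two mitigations around one issue.
edges = {
    "slow api": [
        ("caused_by", "n+1 queries"),
        ("caused_by", "missing index"),
        ("mitigated_by", "caching"),
        ("mitigated_by", "pagination"),
    ]
}

# Instead of one answer, you get a small map of the solution space:
# what causes the problem vs. what mitigates it.
print(polarities(edges, "slow api"))
```

Again, this is a guess at the shape of the feature from the description in the post, not how BrainAPI actually implements it.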

Originally posted by u/shbong on r/ArtificialInteligence