AI coding agents like Claude Code and Cursor are incredibly powerful. Lately, I’ve been using them not just for writing code, but for research and deeper reasoning tasks. That’s where things start to break down.

When you use these agents for research:

- There’s no clear structure to their thinking
- You can’t version their reasoning
- You can’t easily trace where conclusions came from
- Each session feels ephemeral

They’re smart, but chaotic.

So I built an open-source layer called Cognetivy. It adds structure on top of coding agents. You can:

- Define explicit workflows for how the AI should reason
- Track each run as a structured execution
- Version reasoning steps
- Organize and persist data collections over time

In practice, it turns tools like Claude Code and Cursor into managed deep research systems instead of glorified chat interfaces.

It doesn’t replace the model. It doesn’t add another LLM. It adds state, traceability, and reproducibility.

I built this because I personally felt the friction of using powerful agents in an unstructured way. Curious whether others feel the same gap when using AI agents for research-heavy workflows.

If this resonates, I’d love to collaborate with people who want to push this idea forward.

Repo: https://github.com/meitarbe/cognetivy
Originally posted by u/LeadingFarmer3923 on r/ClaudeCode
