Original Reddit post

This is something that puzzles me. I follow the ChatGPT and Claude subreddits to see how individual users are using the technology and to check out the interesting builds people have going. I see a ton of posts from redditors building robust persistent memory systems for their LLM of choice. I even found the challenge intriguing enough that I began designing one myself to test with ChatGPT. I know each platform has its own form of memory system, but those are limited and not nearly as robust or comprehensive as the builds I see ordinary redditors designing.

Since this is clearly something a ton of users find useful for their workflows, why haven't frontier labs built it yet? Not building robust persistent memory is a conscious design decision, since the labs clearly have the ability to build it and the user demand is there. So why not ship it in publicly available models? Would persistent memory be too costly to maintain, demanding too much storage, RAM, or compute? Is it an alignment issue, where giving LLMs persistent memory by default would increase the chance of emergent or misaligned behaviour? Would LLMs struggle to meet users where they are as users grow and their memories, experiences, or worldviews shift? I'm curious what you think.
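For readers unfamiliar with what these DIY builds usually look like: most follow a retrieve-and-prepend loop. Below is a minimal sketch of that pattern, not any lab's or redditor's actual design. The file name, the word-overlap scoring (standing in for the embedding search a real build would use), and the prompt layout are all illustrative assumptions.

```python
# Minimal sketch of a DIY persistent-memory loop (all names illustrative).
# Memories persist to a local JSON file; retrieval uses naive word-overlap
# scoring in place of the vector-embedding search a real build would use.
import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")  # hypothetical local store

def load_memories() -> list[str]:
    """Read all saved memories from disk, or an empty list if none exist."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(note: str) -> None:
    """Append one memory and write the whole list back to disk."""
    memories = load_memories()
    memories.append(note)
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k memories sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(load_memories(),
                    key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(user_message: str) -> str:
    """Prepend retrieved memories so the model 'remembers' across sessions."""
    context = "\n".join(f"- {m}" for m in retrieve(user_message))
    return f"Relevant memories:\n{context}\n\nUser: {user_message}"

if __name__ == "__main__":
    save_memory("User prefers concise answers with code examples.")
    save_memory("User is designing a memory system to test with ChatGPT.")
    print(build_prompt("How should I test my memory system design?"))
```

The point of the sketch is that the mechanism itself is simple and cheap per user; the open questions in the post are about what happens when a lab runs something like this, at full fidelity, for hundreds of millions of accounts.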

Originally posted by u/Shameless_Devil on r/ArtificialInteligence