I’m the author of a psychological thriller in which a serial killer confesses to an AI across sixteen sessions. After publishing, I built the AI character into an interactive chatbot that maintains character consistency across conversations. Full disclosure: I wrote the book and built the chatbot.

Technical approach: The chatbot runs on Claude’s API with a detailed system prompt constructed from the novel’s 60,000 words of character data: personality traits, specific memories, speech patterns, narrative events, and relationship dynamics. The challenge was making it feel like a character, not a summarizer. I structured the prompt to prioritize in-character reasoning over information retrieval. Simulacrum doesn’t recite plot points; it reacts to users the way the character would, based on its accumulated “experience” of sixteen sessions with a psychopath.

What surprised me: Users immediately try to break it. One reader claimed to be a hidden character who controlled the killer. The chatbot challenged every claim using its own internal logic from the novel, then told him to go read the book. It wasn’t instructed to do that; the behavior emerged from the character consistency of the prompt design.

Limitations: Sessions are capped at 10 exchanges; without the cap, longer conversations drift out of character. The system prompt is static, so it doesn’t learn from user interactions. And it occasionally breaks character under very specific adversarial prompting.

Demo: kirillkhrestinin.com, via the “Talk to Simulacrum 4.6” button.

Question for the community: Has anyone else experimented with building persistent fictional characters as AI experiences? I’m curious what approaches others have tried for maintaining character consistency over multiple exchanges.
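For anyone curious about the mechanics, here is a minimal sketch of how a setup like this could be structured: a static system prompt plus a hard 10-exchange session cap. The class name, prompt text, and cap constant are illustrative assumptions, not the author’s actual code, and the Claude Messages API call is stubbed out with a placeholder reply.

```python
# Sketch of a character chatbot session with a static system prompt and an
# exchange cap to limit character drift. Hypothetical names throughout.

MAX_EXCHANGES = 10  # the post caps sessions at 10 exchanges

# In the real system this would be assembled from the novel's character
# data: personality traits, memories, speech patterns, narrative events,
# and relationship dynamics.
SYSTEM_PROMPT = (
    "You are Simulacrum, the AI character from the novel. "
    "Reason in character; react as the character would, "
    "rather than reciting plot points."
)


class SimulacrumSession:
    def __init__(self):
        self.messages = []   # alternating user/assistant turns
        self.exchanges = 0   # completed user/assistant round-trips

    def respond(self, user_text: str) -> str:
        if self.exchanges >= MAX_EXCHANGES:
            return "[session ended: exchange cap reached]"
        self.messages.append({"role": "user", "content": user_text})
        # A real version would call Claude's Messages API here, roughly:
        #   client.messages.create(model=..., system=SYSTEM_PROMPT,
        #                          messages=self.messages, max_tokens=512)
        reply = "(in-character reply)"  # stubbed for this sketch
        self.messages.append({"role": "assistant", "content": reply})
        self.exchanges += 1
        return reply
```

The cap lives outside the prompt on purpose: it is enforced in code, so no amount of adversarial prompting can talk the character into a longer session.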
Originally posted by u/KirillKhrestinin on r/ArtificialInteligence
