Hello, r/ArtificialInteligence. I am in the process of making an LLM Unified File Structure (LMUFS). It is a plain-text, AI-read/write context format: the human writes the header (#H … H#), and the AI does everything else.

https://github.com/lmufs/lmufs

Why did I do this? I don't like JSONL.

What does LMUFS have?

Notes for AI:

;; e.g.
;; this thing does that

Notes for developers, which are ignored by the AI:

;; // change this value from CompanyName to your company ;;

Segmentation (indentation) is 2 spaces, locked.

;; Classes=
;; [i] for integer. [i]12
;; [f] for float. [f]12.34
;; [s] for string. [s]"Hello"
;; [c] for char. [c]'H'
;; [b] for boolean. [b]true [b]false
;; … for connecting
;; Anything in [[]] is literal; the rules don't apply, treat it as raw. [[python script]]

;; Variables=
;; Variables start with $
;; Variables can be set to any of the classes, or to a section.
;; $example= [s]"Hello"

;; Sections=
;; #H for header
;; #S for status
;; #M for memory
;; #Q for query
;; #D for graveyard
;; #E for environment values
;; #A for artifacts
;; A section ends reversed: H#

;; Permissions=
;; [R] for read
;; [RW] for read-write
;; [C] for must check before response
;; [A] for full control of a section
;; [e] for ENUM
;; | differentiates

;; : starts a function
;; = sets something. $var=3
;; In a function:
;; $func:
;;   thing=something
;;   other=other thing
;; end[$func]
;; $func contains thing and other.
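To make this concrete, here is a rough sketch of what a tiny .lmufs file could look like. The values are invented for illustration, and putting the permission tag on the section opener is my own guess at the layout, not a locked part of the spec:

```text
#H [R]
  $example= [s]"Hello"
  $RETRIES= [i]3
  ;; the human writes this header; the AI never modifies it
H#

#M [RW]
  $FACT[0]= [s]"User prefers plain text"
  [[{"note": "anything inside double brackets is raw, even JSONL"}]]
M#
```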
;; $funcIDENTIFIER
;; A function is over via END[$func]
;; Structure is:
;; #INIT → #H → #S → #M → #Q → [AI generates $OUTPUT] → LOG append → write back
;; Placeholder: …
;; Placeholder with type: …,
;; $UPPERCASE: structural variables
;; $lowercase: runtime variables
;; append is a deferred action
;; IDENTIFIER in brackets: instance keys ($FACT[0], LOG[2])

AI Write Rules:

- Never modify #H
- Update $USED in #S with the actual token count after each response
- Append $fact instances to #M for new confirmed knowledge
- Append LOG[$N] after $OUTPUT is known; increment $N
- Append failures to #D (narrative) and #E (structured)
- If $MEMSTATE= [e]FULL → summarise or move the oldest facts to #D
- If $MEMSTATE= [e]FRAGMENTED → deduplicate and clean the #M facts
- Replace all …, placeholders before writing back; never leave them

I asked Claude to compare my format and JSONL. It said:

> For AI-to-AI context passing, LMUFS is meaningfully better than JSONL. The permission model, [C] checks, and literal blocks solve real problems JSONL ignores entirely. For logging, debugging, or tool interoperability, JSONL wins — not because it's better designed, but because the ecosystem already exists. Honest overall: they're not really competing. LMUFS is a session context format. JSONL is a data transport format. Adjacent problems, not the same problem. You could even embed JSONL inside a [[]] literal block in a .lmufs file if you needed structured log data inside a session.

It's a human-readable session context format.
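Because the class tags are regular, decoding them takes very little code. Here is a minimal Python sketch of a decoder for tagged values and `$var=` assignments; the helper names are mine, not part of the repo:

```python
import re

# Match a class-tagged value such as [i]12, [s]"Hello", [b]true.
TAG_RE = re.compile(r'^\[(i|f|s|c|b)\](.*)$')

def decode_value(token: str):
    """Decode one LMUFS class-tagged value into a native Python value."""
    m = TAG_RE.match(token.strip())
    if not m:
        raise ValueError(f"not a class-tagged value: {token!r}")
    tag, body = m.group(1), m.group(2).strip()
    if tag == 'i':
        return int(body)
    if tag == 'f':
        return float(body)
    if tag == 's':
        return body.strip('"')
    if tag == 'c':
        ch = body.strip("'")
        if len(ch) != 1:
            raise ValueError(f"char must be exactly one character: {token!r}")
        return ch
    # tag == 'b'
    if body not in ('true', 'false'):
        raise ValueError(f"bad boolean: {token!r}")
    return body == 'true'

def parse_assignment(line: str):
    """Parse a line like `$example= [s]"Hello"` into (name, value)."""
    name, _, rest = line.partition('=')
    name = name.strip()
    if not name.startswith('$'):
        raise ValueError(f"variables start with $: {line!r}")
    return name, decode_value(rest)
```

Anything inside a `[[ ]]` literal block would be skipped by a decoder like this and passed through raw, per the rules above.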
Originally posted by u/ShiftingUser175 on r/ArtificialInteligence
