I’ve been working on two open-source projects:

- **Lycan**: a small graph execution language with strategy nodes as a first-class primitive (multiple implementations of the same contract; the runtime learns weights from outcome feedback). Compiles to a binary graph, executed by a Rust runtime. No LLM in the hot path.
- **Syntra**: a self-hosted Docker/API appliance that serves compiled Lycan capsules. Multi-tenant, shadow-mode-first, contextual learning per contextKey, persistent filesystem store, with audit/decision/feedback logs kept separate. Includes an MVP YAML authoring layer so you don’t have to write the underlying Lisp.

The use case I care about: repeated decisions where the best option depends on context and the outcome arrives later. LLM model routing, retry/timeout policy, queue selection, threshold tuning: anything where you’d reach for a contextual bandit but don’t want to stand up a Python ML platform to do it.

I’m dogfooding it against my own product (a public AI stock-debate panel with 30-day market-resolved outcomes, MoEFolio.ai). The first surprise wasn’t from the runtime; it was that my contextKey schema was collapsing all sectors into a single `unknown` bucket, because my sector lookup only resolved symbols from one of three input paths. The bandit was nominally 5-dimensional but effectively 2-dimensional, learning a cross-sector average that meant nothing. Fixing the data pipeline, not the algorithm, is most of the work in adaptive systems.

Apache-2.0, very early; would love eyes from anyone who’s worked on bandits in production.

github.com/SectorOPS/Lycan
github.com/SectorOPS/Syntra
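For anyone who wants a concrete picture of the strategy-node idea (several implementations of one contract, per-context weights learned from delayed outcome feedback): here is a minimal epsilon-greedy sketch in Python. This is my own illustration, not Lycan's actual algorithm or API; all names are hypothetical.

```python
import random
from collections import defaultdict

class StrategyNode:
    """Several implementations of one contract; per-context stats
    learned from outcome feedback. Epsilon-greedy is a stand-in for
    whatever weighting Lycan's runtime actually uses."""

    def __init__(self, strategies, epsilon=0.1, seed=None):
        self.strategies = list(strategies)   # candidate implementation names
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # contextKey -> strategy name -> [count, running mean reward]
        self.stats = defaultdict(lambda: {s: [0, 0.0] for s in self.strategies})

    def choose(self, context_key):
        stats = self.stats[context_key]
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)  # explore
        # exploit: highest mean reward so far (unseen arms default to 0.0)
        return max(stats, key=lambda s: stats[s][1])

    def feedback(self, context_key, strategy, reward):
        # the outcome may arrive much later; update the running mean
        count, mean = self.stats[context_key][strategy]
        count += 1
        self.stats[context_key][strategy] = [count, mean + (reward - mean) / count]

node = StrategyNode(["fast", "careful"], epsilon=0.0, seed=1)
node.feedback("tech", "careful", 1.0)
node.feedback("tech", "fast", 0.2)
print(node.choose("tech"))  # exploits the better arm for this context
```

The per-`context_key` dictionary is the part that matters for the bug described below the fold: if every event resolves to the same key, all of this collapses into a single global average.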
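The contextKey collapse is easy to reproduce in miniature. This is a hypothetical reconstruction (the symbols, field names, and lookup table are invented); the point is that when a lookup only covers one of several input paths, the context dimension silently degenerates to `unknown`:

```python
# Hypothetical reconstruction: the sector lookup only understood one of
# three input shapes, so most events landed in the "unknown" bucket.
SECTORS = {"NVDA": "tech", "XOM": "energy"}

def resolve_sector_buggy(event):
    # only handles the {"symbol": ...} path
    return SECTORS.get(event.get("symbol"), "unknown")

def resolve_sector_fixed(event):
    # normalize all three input paths before the lookup
    symbol = (event.get("symbol")
              or event.get("ticker")
              or (event.get("position") or {}).get("symbol"))
    return SECTORS.get(symbol, "unknown")

events = [
    {"symbol": "NVDA"},                # input path 1
    {"ticker": "XOM"},                 # input path 2
    {"position": {"symbol": "NVDA"}},  # input path 3
]
print([resolve_sector_buggy(e) for e in events])  # ['tech', 'unknown', 'unknown']
print([resolve_sector_fixed(e) for e in events])  # ['tech', 'energy', 'tech']
```

A shadow-mode run that logs the distribution of resolved context keys (rather than just the decisions) is what surfaces this kind of collapse early.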
Originally posted by u/Covert-Agenda on r/ArtificialInteligence
