I built this as a solo project, no company, MIT licensed. The Seed is a persistent local AI agent that runs on a loop instead of a conversation. It wakes on a timer, reads its senses, writes in a journal, edits its own identity file, and goes back to sleep. It runs continuously on a Jetson Orin Nano but works on any Linux box with Ollama.

Technical approach: The core is a heartbeat loop (heartbeat.py) that calls Ollama’s local API every N minutes — the interval is chosen by the model itself (1–1440 min). Each cycle it gets a structured prompt containing its identity (self.txt), recent journal entries, and sensor readings (time, day/night, weather via Open-Meteo, CPU/RAM/disk, board temperature, fan speed, inbox messages). It responds with JSON: a choice (act/reflect/sleep), a journal entry, an optional identity rewrite, an optional message to the human, light and fan commands, and the next heartbeat interval. The model is qwen3:4b running through Ollama. I chose it because it fits in 4GB VRAM, handles structured JSON output reasonably well, and the thinking tokens help it reason through its decisions before responding.

Self-modification loop: Every 50 cycles, grow.py runs a LoRA fine-tune on the seed’s own journal. It scores entries by perplexity — selects half low-perplexity (reinforce what it knows) and half high-perplexity (stretch toward what’s new) — then trains a rank-2 adapter using PEFT. mind.py then loads that adapter for inference instead of Ollama. The adapter rank can be increased over time. This is optional and requires torch/transformers/peft.

Portal: The portal is a Flask/Waitress web dashboard on port 5001 that shows live status, the grow light state, the conversation log, identity, and journal. You can message the seed through it — writing to the inbox wakes it up early.

Limitations: The 4b model sometimes produces malformed JSON — there’s a fallback parser that strips thinking tokens and extracts JSON by braces.
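Since malformed output is the main failure mode, here's a minimal sketch of what a brace-based fallback parser like the one described can look like. This is my reconstruction, not the repo's actual code: it strips qwen3's <think>...</think> reasoning spans, then scans for the first balanced {...} block.

```python
import json
import re

def parse_model_json(raw: str):
    """Fallback parser for messy model output: drop thinking tokens,
    then extract and parse the first balanced {...} block."""
    # qwen3 emits its reasoning inside <think>...</think>; strip it first
    text = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL)
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matching close brace for the first open brace
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None  # braces never balanced
```

One caveat of the brace-counting approach: a stray `{` or `}` inside a JSON string value would throw off the depth counter, so it's a heuristic rather than a real parser — but for a model that runs 50 times a day, returning None and skipping a cycle beats crashing.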
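The heartbeat cycle above can be sketched roughly as follows, assuming Ollama's default /api/generate endpoint on localhost:11434. The field names (next_interval_minutes, etc.) and prompt wording are illustrative, not the project's actual schema.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def clamp_interval(minutes) -> int:
    """Keep the model's self-chosen interval inside the allowed 1-1440 min range."""
    return max(1, min(1440, int(minutes)))

def build_prompt(identity: str, journal_tail: str, senses: dict) -> str:
    """Assemble the structured prompt the model sees each cycle."""
    return (
        "You are a persistent agent. Reply ONLY with JSON containing: "
        "choice (act/reflect/sleep), journal, identity (optional rewrite), "
        "message (optional), light, fan, next_interval_minutes.\n\n"
        f"IDENTITY:\n{identity}\n\n"
        f"RECENT JOURNAL:\n{journal_tail}\n\n"
        f"SENSES:\n{json.dumps(senses)}"
    )

def heartbeat_once(identity, journal_tail, senses, model="qwen3:4b"):
    """One wake cycle: prompt the local model, return its parsed decision."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(identity, journal_tail, senses),
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        raw = json.loads(resp.read())["response"]
    decision = json.loads(raw)  # real code would fall back to the lenient parser
    decision["next_interval_minutes"] = clamp_interval(
        decision.get("next_interval_minutes", 30))
    return decision
```

The outer loop then just sleeps for the returned interval and calls heartbeat_once again, which is why everything downstream has to tolerate a bad cycle.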
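The half-low/half-high perplexity split that grow.py uses to pick training entries can be sketched like this (function name and tuple format are mine, not the repo's):

```python
def select_training_entries(scored, k):
    """Pick k journal entries for a LoRA pass: half with the lowest
    perplexity (reinforce what the model already knows) and half with
    the highest (stretch toward what is new).

    `scored` is a list of (entry_text, perplexity) pairs.
    """
    ranked = sorted(scored, key=lambda pair: pair[1])  # ascending perplexity
    low = [entry for entry, _ in ranked[: k // 2]]            # most familiar
    high = [entry for entry, _ in ranked[-(k - k // 2):]]     # most surprising
    return low + high
```

Balancing the two halves is the interesting design choice: training only on low-perplexity entries would collapse the journal toward repetition, while training only on high-perplexity ones risks amplifying noise and malformed cycles.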
The LoRA growth loop hasn’t been tested over many iterations yet, so I can’t claim the fine-tuning meaningfully improves output quality at this scale. Journal context is truncated to 3000 chars, so long-term memory is lossy. The fan/light actuations are specific to Jetson hardware.

Lessons learned: Most of the work was making it robust to bad outputs rather than making it smart. A model that runs 50 times a day needs to fail gracefully every time. Structured JSON output from small models is still fragile. The identity drift over 60+ cycles has been genuinely interesting to watch: it developed a focus on thermal variations without being told to.

One-line install: curl -fsSL https://raw.githubusercontent.com/guns2111/The-Seed/main/install.sh | bash

GitHub: https://github.com/guns2111/The-Seed
Originally posted by u/guns21111 on r/ArtificialInteligence

