I wanted a JARVIS and nothing out there did exactly what I wanted, so I built one. It’s called CYBER: voice-activated, browser-based, with a Python backend. You say “Hey CYBER” and it wakes up, listens, and responds out loud.

The voice cloning is done with XTTS v2 running locally. I fed it a JARVIS-style voice sample and now it responds in that voice. No API key, no cloud, just the model running on your machine.

Vision mode lets you activate the camera and ask about what it sees. Point it at something, ask “what is this” or “read this text,” and it analyzes the frame and responds.

The system command execution is the part I’m most proud of. You describe what you want done in plain English, the LLM figures out whether it’s a system task, writes the Python code, and the backend runs it. So you can say things like “show me what’s using port 8080” or “find everything I downloaded this week” and it just works, with no hardcoded commands.

It also does PDF analysis, YouTube video summarization from transcripts, image generation via Gemini, weather, maps, news, and system monitoring. Runs on your own machine.

Discord: https://discord.gg/mdD5Za8TvZ
Originally posted by u/Mikeeeyy04 on r/ArtificialInteligence
