I have been building a wavetable synth, and much of the development process involved working with AI coding tools, including Claude, to iterate on the architecture, DSP plumbing, UI, and parameter system.

My tech stack (also listed in the tool description):

- Audio: AudioWorklet (DSP), Web Audio API
- Frontend: React 18, TypeScript, TailwindCSS
- Backend: Express 5, TypeScript
- Database: PostgreSQL, Drizzle ORM
- AI + agents: Google Gemini API, Claude, Codex

The result is a browser-native wavetable synth with dual oscillators, a modulation matrix, a filter, effects, and full manual control. On top of that sits a semantic layer: you describe the sound you want, and it maps the description to a structured parameter set. I am calling it SynthGPT. Besides Claude, I also used Codex (for hardening the backend) and Gemini (for the semantic understanding).

The interesting part for me has been using AI coding tools not for quick demos but for building a real instrument, with performance constraints and a fairly deep parameter surface that a user can control through natural language.

It is free right now. I am also working on porting the engine into a standalone VST plugin so it can run inside DAWs.

Try it here: https://synthgpt.cc/ and if you want a full walkthrough: https://www.youtube.com/watch?v=vU1Bjs-RSbQ

Curious how others here are using Claude or similar tools to build serious creative software, not just your typical SaaS.
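Since the engine runs in an AudioWorklet, the heart of a wavetable oscillator is a per-block render loop that reads a single-cycle table at a fractional phase. Here is a minimal TypeScript sketch of that loop; all names are illustrative and not taken from the actual SynthGPT source:

```typescript
const TABLE_SIZE = 2048;

// Build one cycle of a sine wavetable (a real synth would load morphing tables).
function makeSineTable(size: number = TABLE_SIZE): Float32Array {
  const table = new Float32Array(size);
  for (let i = 0; i < size; i++) {
    table[i] = Math.sin((2 * Math.PI * i) / size);
  }
  return table;
}

// Read the table at a fractional phase in [0, 1) with linear interpolation.
function readTable(table: Float32Array, phase: number): number {
  const pos = phase * table.length;
  const i0 = Math.floor(pos) % table.length;
  const i1 = (i0 + 1) % table.length;
  const frac = pos - Math.floor(pos);
  return table[i0] + frac * (table[i1] - table[i0]);
}

// Render one block, advancing phase by freq / sampleRate per sample.
// This is the same loop an AudioWorkletProcessor's process() callback
// would run over its 128-frame render quantum.
function renderBlock(
  table: Float32Array,
  freq: number,
  sampleRate: number,
  out: Float32Array,
  startPhase: number
): number {
  let phase = startPhase;
  const inc = freq / sampleRate;
  for (let n = 0; n < out.length; n++) {
    out[n] = readTable(table, phase);
    phase = (phase + inc) % 1;
  }
  return phase; // carry the phase into the next block to avoid clicks
}
```

Returning the phase and feeding it back into the next call keeps the waveform continuous across block boundaries, which matters because the worklet is called once per render quantum.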
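A semantic layer that maps a text description to a structured parameter set generally needs a sanitization step, since model output is untrusted JSON. A hypothetical TypeScript sketch of that step follows; the schema, field names, and ranges are assumptions for illustration, not SynthGPT's actual parameter surface:

```typescript
// Hypothetical subset of a synth parameter schema.
interface SynthParams {
  osc1Wave: "saw" | "square" | "sine";
  filterCutoffHz: number;  // assumed valid range: 20–20000
  filterResonance: number; // assumed valid range: 0–1
}

const DEFAULTS: SynthParams = {
  osc1Wave: "saw",
  filterCutoffHz: 1200,
  filterResonance: 0.2,
};

function clamp(x: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, x));
}

// Coerce the raw object parsed from the model's JSON output into a valid
// SynthParams, clamping numbers and falling back to defaults per field.
function sanitize(raw: Record<string, unknown>): SynthParams {
  const wave = raw.osc1Wave;
  return {
    osc1Wave:
      wave === "saw" || wave === "square" || wave === "sine"
        ? wave
        : DEFAULTS.osc1Wave,
    filterCutoffHz:
      typeof raw.filterCutoffHz === "number"
        ? clamp(raw.filterCutoffHz, 20, 20000)
        : DEFAULTS.filterCutoffHz,
    filterResonance:
      typeof raw.filterResonance === "number"
        ? clamp(raw.filterResonance, 0, 1)
        : DEFAULTS.filterResonance,
  };
}
```

Clamping per field rather than rejecting the whole response means a mostly-good model answer still produces a playable patch.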
Originally posted by u/radutrandafir on r/ClaudeCode
