A technical breakdown of solving “Stage Fright.”

disclosing first: i am the solo dev behind this project. the biggest lie in tech right now is that “AI builds everything in 5 minutes.” yes, i used Cursor and Claude to build the core logic of Vouchy ( https://vouchy.click/ ) quickly, but turning that code into a product that real humans can actually use took me months. why months?
- Building vs. Building for Users: it’s easy to prompt a video recorder. it’s hard to build a “trust architecture” that works for a non-tech customer who is terrified of the camera.
- The “Work to Eat” Factor: i’m building this from Ethiopia, and i had to finish other client projects just “to eat” and stay afloat. balancing the “daily bread” with a solo SaaS build is the reality most “hustle” tweets don’t show.
- The Limitations of Existing Tools: there are other testimonial tools, but they feel like cold databases. they don’t solve the “what do i say?” problem. i had to rebuild the recording flow 3 times to get the psychological friction low enough.
- The Teleprompter Synchronization: the most difficult technical part was the browser-side recording. I implemented a custom hook using requestAnimationFrame to keep the teleprompter scroll at a consistent 60fps while the MediaRecorder API writes chunks to the buffer. most browser-based recorders jitter if the main thread is busy; I had to move the scroll logic into a separate animation loop to keep it smooth for the user reading the script.
- The “AI Polish” Latency Benchmarks: For the text-enhancement feature, I’m using the Claude 3.5 Sonnet API via Edge Functions. The goal was to take raw customer input and refine it into professional copy. By using Edge Functions, I dropped the response latency from ~2.5s to under 1.1s, which is the threshold where users start to feel like the app is “lagging.”
- Auto-Display Architecture: to achieve “zero-code” updates for the user, I used Supabase Realtime. when a video is approved in the dashboard, it triggers a Postgres function that invalidates the widget’s CDN cache, so the new video “auto-embeds” on the customer’s site instantly.

AI is a “co-pilot,” but the “pilot” still has to navigate the messy reality of user psychology. the biggest limitation i face right now is gaze (reading from a screen looks different than looking into the lens), and i’m looking for technical advice on post-processing gaze correction.

i’m too close to launch now and i need this community to roast the product. i need the harsh comments, the bug reports, and the UI feedback before i go live.

Demo Link: https://vouchy.click/
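for anyone curious about the teleprompter point above, here’s roughly the shape of the idea — a minimal sketch, not the actual Vouchy source (all names here are mine): derive the scroll offset from wall-clock time inside its own requestAnimationFrame loop, so MediaRecorder activity can delay a frame but never accumulate drift.

```typescript
// Illustrative sketch, not the actual Vouchy implementation.

// Pure helper: the scroll offset is a function of elapsed time, not frame
// count, so a dropped frame can never desync the script from the speaker.
export function scrollOffset(elapsedMs: number, pxPerSecond: number): number {
  return (elapsedMs / 1000) * pxPerSecond;
}

// Browser-only loop; rAF is looked up at call time so this module still
// loads (and the helper above stays testable) outside a browser.
export function startTeleprompter(
  el: { scrollTop: number },
  pxPerSecond: number
): () => void {
  const raf = (globalThis as any).requestAnimationFrame as (
    cb: (t: number) => void
  ) => number;
  const caf = (globalThis as any).cancelAnimationFrame as (id: number) => void;

  const start = performance.now();
  let id = 0;
  const tick = (now: number) => {
    // Recompute from elapsed wall-clock time each frame: a MediaRecorder
    // dataavailable burst can delay one frame, but the next frame jumps to
    // the correct offset instead of drifting behind the reader.
    el.scrollTop = scrollOffset(now - start, pxPerSecond);
    id = raf(tick);
  };
  id = raf(tick);

  return () => caf(id); // stop function
}
```

the key design choice is that the loop never does `scrollTop += step`: incremental stepping is exactly what jitters when the main thread stalls.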
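the “AI polish” path, sketched under assumptions: the endpoint, headers, and response shape below follow Anthropic’s public Messages API docs, but the prompt, model snapshot, and function names are mine for illustration, not Vouchy’s actual code.

```typescript
// Hedged sketch of the "AI polish" edge function; prompt and names are
// illustrative, not the actual Vouchy source.

// Pure payload builder — easy to unit-test without touching the network.
export function buildPolishRequest(rawTestimonial: string) {
  return {
    model: "claude-3-5-sonnet-20240620", // assumed snapshot name
    max_tokens: 300,
    messages: [
      {
        role: "user" as const,
        content:
          "Rewrite this customer testimonial into polished, professional copy. " +
          "Keep the meaning and the first-person voice:\n\n" + rawTestimonial,
      },
    ],
  };
}

// Edge function handler body. The post attributes the ~2.5s → ~1.1s drop to
// running this at the edge, close to the user, so only the model call itself
// crosses the network boundary.
export async function polish(raw: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildPolishRequest(raw)),
  });
  const data = await res.json();
  return data.content[0].text; // Messages API returns a list of content blocks
}
```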
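and a rough sketch of the approve → auto-embed path described in the Auto-Display bullet — the URL scheme, table payload, and column names below are my guesses for illustration, not Vouchy’s real schema.

```typescript
// Hypothetical sketch of the auto-embed refresh path; URL scheme and payload
// shape are invented for illustration.

// Versioned widget URL: "invalidating the CDN cache" can be as simple as the
// approval trigger bumping a version token that the embed script requests.
export function widgetUrl(siteId: string, version: number): string {
  return `https://cdn.example.com/widget/${siteId}.js?v=${version}`;
}

// Handler the embedded widget would attach to a Supabase Realtime
// subscription on an approvals table: on approval, swap in the new bundle.
export function onApproval(
  payload: { new: { site_id: string; widget_version: number } },
  refresh: (url: string) => void
): void {
  refresh(widgetUrl(payload.new.site_id, payload.new.widget_version));
}
```

the nice property of version-tokened URLs is that nothing ever needs to be purged from the CDN: old bundles simply stop being requested.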
Originally posted by u/alazar_tesema on r/ArtificialInteligence
