Original Reddit post

https://reddit.com/link/1sj2d2u/video/ni7xzmck7oug1/player

I'm the solo developer behind AskSary (asksary.com) - full disclosure, this is my own project. I thought I would try something different from other platforms when I built mine. I wanted something that looks good visually and in some respects represents artificial intelligence. I haven't come across another chat platform doing this. The interface also displays in 26 languages, with RTL support and a full UI flip mechanism too.

Back to the wallpapers. The idea was for people to set their mood. People like to customise their desktop theme or change the appearance of their phone's home screen, so I thought: what about their chat interface? At the end of the day, people can spend hours doing research or chatting away, and that's where the concept came from. It may not suit everyone, I know, and don't worry - there's an option to turn it off and have the plain dark/light mode theme with no wallpaper at all. But for those interested, this is what I've built: a UI with 30+ live animated wallpapers running as canvas elements behind the chat interface. I wanted to document the technical approach since I haven't seen anyone else do this in a chat platform context.

How it works: each wallpaper is an HTML5 canvas element that sits in a fixed position behind the chat container. They're toggled via CSS visibility rather than DOM insertion/removal, to avoid re-initialising animation loops on every switch. Each animation has a paired start and stop function that manages its own requestAnimationFrame ID, resize listeners, and cleanup.

The more complex ones use actual physics. The particle network has mouse repulsion - it calculates the vector between the cursor position and each particle every frame and applies a repulsive force that grows with proximity. The Cyber Orb uses manual 3D rotation matrices and perspective projection to render the gyroscopic rings without any WebGL dependency.
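The paired start/stop lifecycle described above can be sketched roughly like this. Everything here is illustrative, not the platform's actual code: the `makeWallpaper` factory name, the resize handling, and the visibility toggle are my assumptions about one way to implement the pattern.

```javascript
// Sketch of a wallpaper lifecycle: a paired start/stop that owns its
// requestAnimationFrame ID and resize listener, and toggles CSS visibility
// instead of removing the canvas from the DOM.
function makeWallpaper(canvas, drawFrame) {
  let rafId = null;    // current requestAnimationFrame handle
  let onResize = null; // resize listener, registered only while running

  function start() {
    if (rafId !== null) return; // already running
    onResize = () => {
      canvas.width = canvas.clientWidth;
      canvas.height = canvas.clientHeight;
    };
    window.addEventListener("resize", onResize);
    onResize(); // size the canvas immediately
    const loop = (t) => {
      drawFrame(canvas.getContext("2d"), t);
      rafId = requestAnimationFrame(loop);
    };
    rafId = requestAnimationFrame(loop);
    canvas.style.visibility = "visible";
  }

  function stop() {
    if (rafId !== null) cancelAnimationFrame(rafId);
    rafId = null;
    window.removeEventListener("resize", onResize);
    onResize = null;
    canvas.style.visibility = "hidden"; // keep the element in the DOM
  }

  return { start, stop, get running() { return rafId !== null; } };
}
```

Because the canvas stays in the DOM and only its visibility flips, switching wallpapers never tears down and rebuilds the animation state.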
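The per-frame repulsion step boils down to simple vector math. A minimal sketch, with made-up constants (the radius and strength here are illustrative tuning values, not the platform's):

```javascript
// Push a particle away from the cursor, stronger the closer the cursor is.
// Constants are illustrative; tune for feel.
const REPEL_RADIUS = 120; // px: cursor influence radius
const REPEL_STRENGTH = 3;

function applyMouseRepulsion(particle, mouse) {
  // Vector from cursor to particle.
  const dx = particle.x - mouse.x;
  const dy = particle.y - mouse.y;
  const dist = Math.hypot(dx, dy);
  if (dist === 0 || dist > REPEL_RADIUS) return; // outside influence
  // Force falls off with distance, peaking when the cursor is closest.
  const force = (REPEL_STRENGTH * (REPEL_RADIUS - dist)) / REPEL_RADIUS;
  // Apply it along the normalised cursor-to-particle direction.
  particle.vx += (dx / dist) * force;
  particle.vy += (dy / dist) * force;
}
```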
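The no-WebGL orb technique - rotate points in 3D by hand, then do a perspective divide onto the 2D canvas - can be sketched as below. The function names and the focal-length constant are my assumptions, not taken from the project:

```javascript
// Manual 3D rotation plus perspective projection, canvas-2D style.
const FOCAL_LENGTH = 300; // illustrative: controls perspective strength

// Rotate a point {x, y, z} around the Y axis.
function rotateY(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: c * p.x + s * p.z, y: p.y, z: -s * p.x + c * p.z };
}

// Rotate a point around the X axis.
function rotateX(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: p.x, y: c * p.y - s * p.z, z: s * p.y + c * p.z };
}

// Perspective divide: points further away (larger z) shrink toward the
// screen centre (cx, cy). The returned scale can also size dots/lines.
function project(p, cx, cy) {
  const scale = FOCAL_LENGTH / (FOCAL_LENGTH + p.z);
  return { x: cx + p.x * scale, y: cy + p.y * scale, scale };
}
```

Each ring is then just a circle of points run through a couple of rotations per frame and projected before being stroked with the 2D API.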
For the video wallpapers, I generate the source clips using Kling and Veo via my own platform, compress them with HandBrake to under 2 MB per clip, then loop them natively. The rainforest one is three 8-second clips stitched together with iMovie crossfades to avoid a visible loop point.

Limitations I ran into with the first release: 4K video wallpapers killed mid-range Android devices. I had 16 video wallpapers ready to ship and had to comment them all out after testing on real hardware representative of my user base (heavy Middle East/South Asia traffic). The canvas animations, by contrast, scale perfectly to any screen size since they're procedurally generated - a 65-inch TV renders identically to a phone.

The particle wallpaper draws lines between nearby particles. The problem is that every particle has to check every other particle to see if they're close enough to connect, so with 100 particles that's roughly 10,000 checks every single frame. The cost grows quadratically with the particle count. To keep it smooth I just calculate how many particles the screen can handle based on its size and never go above that.

Stack: Next.js, Capacitor for iOS/Android/Mac/Vision Pro, Firebase, Vercel. No WebGL, no Three.js - pure canvas 2D API throughout. The Apple Vision Pro is the only target I've not tested. Unfortunately Xcode doesn't have a Vision Pro simulator, but the functionality should work as if it were a Mac desktop app, which I have confirmed works. If anyone's got a Vision Pro, I'd love to know how this looks on there.

Demo: asksary.com
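The particle budget and the pairwise check can be sketched as follows. The density constant and cap are illustrative numbers, not the platform's actual tuning:

```javascript
// Cap the particle count by screen area, since the connection pass is
// O(n^2): n particles means n*(n-1)/2 distance tests per frame.
const PARTICLES_PER_MEGAPIXEL = 60; // illustrative density
const MAX_PARTICLES = 150;          // illustrative hard cap

function particleBudget(width, height) {
  const megapixels = (width * height) / 1e6;
  return Math.min(MAX_PARTICLES, Math.round(megapixels * PARTICLES_PER_MEGAPIXEL));
}

// Find pairs close enough to connect; iterating j > i halves the work,
// and comparing squared distances avoids a sqrt per pair.
function connectionPairs(particles, maxDist) {
  const pairs = [];
  for (let i = 0; i < particles.length; i++) {
    for (let j = i + 1; j < particles.length; j++) {
      const dx = particles[i].x - particles[j].x;
      const dy = particles[i].y - particles[j].y;
      if (dx * dx + dy * dy <= maxDist * maxDist) pairs.push([i, j]);
    }
  }
  return pairs;
}
```

With the budget applied up front, a 4K screen still lands on a fixed ceiling rather than scaling the pair count with raw pixel area.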

Originally posted by u/Beneficial-Cow-7408 on r/ArtificialInteligence