I’ve been vibe coding iOS apps with Claude Code and started thinking about how the AI actually sees the screen. Right now it just takes screenshots, which works but feels clunky and slow for testing. Is there a way to minimize the delay so it feels more seamless in use? I tried Playwright MCP, but it was too slow and inefficient and didn’t feel like the right fit. I’ve done some research and I get that there are limitations here, but I’m curious whether there’s a more direct approach: accessibility APIs, screen capture streams, or something else entirely. (I’m non-technical / a designer.)
Originally posted by u/Character_Water6298 on r/ClaudeCode
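One direction the post hints at, sketched below with no claim that Claude Code supports it out of the box: iOS already exposes the full UI as an accessibility tree through Apple's XCUITest framework, so an agent could read structured element data (labels, identifiers, frames) instead of parsing pixels. The `loginButton` identifier in this sketch is hypothetical.

```swift
import XCTest

// A minimal sketch of the accessibility-tree approach using XCUITest.
// Instead of capturing pixels, we dump the app's accessibility hierarchy
// as text, which is far cheaper to produce and parse than a screenshot.
final class AccessibilitySnapshotTests: XCTestCase {
    func testDumpAccessibilityTree() {
        let app = XCUIApplication()
        app.launch()

        // debugDescription prints the current element hierarchy: labels,
        // identifiers, frames, and traits for every on-screen element.
        print(app.debugDescription)

        // Elements can also be queried directly by accessibility identifier,
        // so state can be verified without any image processing.
        // "loginButton" is a hypothetical identifier for illustration.
        let loginButton = app.buttons["loginButton"]
        XCTAssertTrue(loginButton.waitForExistence(timeout: 2))
    }
}
```

The trade-off: this only sees what the app exposes to accessibility (custom-drawn views with no identifiers are invisible to it), whereas screenshots capture everything but cost more latency and tokens.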
