Google Chrome 145 just shipped an experimental feature called WebMCP. It’s probably one of the biggest deals of early 2026, and it’s been buried in the details. WebMCP lets websites register tools that AI agents can discover and call directly, instead of taking screenshots and parsing pixels. Less tooling, more precision.

AI agent tools like agent-browser currently browse by rendering pages, taking screenshots, sending them to vision models, deciding what to click, and repeating. Every single interaction. 51% of web traffic is already bots doing exactly this (per Imperva’s latest report). Half the internet, just… screenshotting.

WebMCP flips the model. Websites declare their capabilities as structured tools that agents can invoke directly, no pixel-reading required. It’s the same shift fintech went through when Open Banking replaced screen-scraping with APIs.

The spec’s still a W3C Community Group Draft with a number of open issues, but Chrome’s backing it, and it’s designed for progressive enhancement: you can add it to existing forms with a couple of HTML attributes, or register tools from script (see the sketch below).

I wrote up how it works, which browsers are racing to solve the same problem differently, and when developers should start caring: https://extended.reading.sh/webmcp
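To make the registration model concrete, here’s a minimal sketch of the imperative side. Big caveat: since the spec is still a draft, everything below is provisional. The `navigator.modelContext.provideContext()` shape follows the current explainer, but the tool name (`add-todo`), the DOM selectors, and the schema fields are made-up illustrations, not a final API.

```typescript
// Hedged sketch of WebMCP tool registration. The modelContext /
// provideContext names mirror the current draft explainer and may change;
// the tool itself ("add-todo") is purely illustrative.

interface WebMCPTool {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the tool's arguments
  execute(args: Record<string, unknown>): Promise<unknown>;
}

// Minimal ambient typing, since lib.dom has no entry for this
// experimental API yet.
declare global {
  interface Navigator {
    modelContext?: { provideContext(ctx: { tools: WebMCPTool[] }): void };
  }
}

// Feature-detect and degrade gracefully: WebMCP is experimental,
// so the page must work without it.
if (navigator.modelContext) {
  navigator.modelContext.provideContext({
    tools: [
      {
        // Lets an agent add a todo item without screenshotting the page
        // and hunting for the input box.
        name: "add-todo",
        description: "Add a new item to the user's todo list.",
        inputSchema: {
          type: "object",
          properties: {
            text: { type: "string", description: "Text of the todo item" },
          },
          required: ["text"],
        },
        async execute({ text }) {
          // Reuse the page's existing form (hypothetical #todo-form IDs)
          // so human and agent interactions share one code path.
          const input = document.querySelector<HTMLInputElement>("#todo-input");
          const form = document.querySelector<HTMLFormElement>("#todo-form");
          if (!input || !form) throw new Error("todo form not found");
          input.value = String(text);
          form.requestSubmit();
          return { status: "ok", added: text };
        },
      },
    ],
  });
}

export {}; // keep this file a module so `declare global` is valid
```

The design choice worth noting: routing `execute` through the page’s own form submission means agent calls hit the same validation and handlers as human clicks, which is exactly the progressive-enhancement story the post describes.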
Originally posted by u/jpcaparas on r/ClaudeCode
