Original Reddit post

Over the last ~2 months I went down a rabbit hole trying to understand something many marketers are starting to talk about: how often does a brand appear inside AI answers? Not Google rankings. Not traditional SEO. I mean answers inside systems like ChatGPT, Perplexity, Claude, and Gemini.

So I experimented with several platforms that track this kind of thing, including: Peec AI, Otterly, Goodie AI, LLMClicks, AthenaHQ, Profound, Rankscale, Knowatoa, plus the AI visibility experiments in Semrush and Nightwatch. Not promoting any of these. I was just curious whether this whole "AI visibility" concept is real or mostly hype. Here's what I learned.

First Surprise: Most Platforms Measure the Same Core Signal

After testing multiple dashboards, the underlying system is usually something like this:

• Send prompts into LLM systems
• Ask questions related to a niche or category
• Check which brands appear in responses
• Track mention frequency
• Compare results with competitors

Then everything gets summarized into a visibility score or trend graph. Different platforms visualize it differently, but the basic idea is similar.

Second Surprise: Prompt Wording Changes Everything

This part shocked me. Example prompts I tested:

• "Best local SEO tools"
• "Top tools agencies use for GMB management"
• "Platforms for managing Google Business Profiles"

Each version produced very different brand mentions. Sometimes a company appeared for one query but completely disappeared for another. So now I'm wondering: are we measuring brand authority, or just alignment with a particular prompt phrasing?

Third Surprise: Models Disagree With Each Other

The same prompt produced different results across models. Example patterns I noticed:

• ChatGPT → mentioned certain brands repeatedly
• Perplexity → cited sources and sometimes different companies
• Claude → often gave more generalized answers
• Gemini → sometimes returned completely different brand sets

That makes tracking "rankings" inside AI responses very tricky. There isn't a stable SERP like Google's.
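For anyone curious what the core measurement loop most of these platforms run actually looks like, here is a minimal sketch. The brand names, prompts, and canned responses are all hypothetical stand-ins; a real tracker would call each model's API where the sample data sits.

```python
from collections import Counter

# Hypothetical brands to track and canned model responses per prompt.
# A real tool would query ChatGPT, Perplexity, Claude, Gemini, etc. here.
BRANDS = ["AlphaSEO", "BetaRank", "GammaLocal"]

SAMPLE_RESPONSES = {
    "Best local SEO tools": [
        "Popular options include AlphaSEO and BetaRank.",
        "Many agencies rely on AlphaSEO for local listings.",
    ],
    "Platforms for managing Google Business Profiles": [
        "GammaLocal and BetaRank are commonly mentioned.",
    ],
}

def mention_counts(responses_by_prompt, brands):
    """Count how often each brand appears across all responses."""
    counts = Counter()
    for responses in responses_by_prompt.values():
        for text in responses:
            for brand in brands:
                if brand.lower() in text.lower():
                    counts[brand] += 1
    return counts

def visibility_share(counts):
    """Turn raw mention counts into a share-of-voice percentage."""
    total = sum(counts.values())
    return {b: round(100 * c / total, 1) for b, c in counts.items()}

counts = mention_counts(SAMPLE_RESPONSES, BRANDS)
print(counts)                    # mention frequency per brand
print(visibility_share(counts))  # "visibility score" as share of voice
```

That's essentially it: count substring matches, divide by the total, chart it over time. Which is also why prompt phrasing and model choice swing the numbers so much.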
What I Actually Found Useful

Even though I'm skeptical about the hype, these platforms did help with a few things:

• Seeing how clearly a brand is associated with a niche
• Understanding competitor narrative positioning
• Spotting weak messaging
• Observing how different models describe a category

It was interesting from a market-perception perspective.

What I Did NOT See

Despite improvements in brand mentions inside AI answers, I did not see clear evidence of:

• immediate traffic spikes
• conversion changes
• Search Console impression jumps

Maybe that will change in the future if AI assistants become major discovery channels. But right now the connection still feels indirect.

The Bigger Question

Are we trying to apply SEO-style ranking thinking to something fundamentally different? Search engines rank pages; language models generate probabilistic answers. That might require completely different measurement frameworks.

Curious What Others Are Seeing

If anyone here has experimented with AI visibility tracking:

• Did you notice any real traffic impact?
• Are clients asking for AI visibility reports yet?
• Do you think this will become the next layer of SEO, or are we still in the early experimentation phase?

Would love to hear other real experiences.

Originally posted by u/Real-Assist1833 on r/ArtificialInteligence