Insights
Why Brands Are What AI Models Say They Are
We're seeing plenty of discussion about visibility, share of voice, and sentiment related to AI search responses.
But measuring any of this has its own set of challenges, so let's start by understanding what AI models say about brands right now.
Query first, strategize second
Ask any LLM or AI engine about a brand and the responses will give you a working view of how the model understands it. But treat it as a picture, not a fact.
Responses shift depending on how a question is phrased, which model you're using, and when you're asking. Run the same prompt twice and you may not get the same answer.
So query broadly: use different framings, different intents, different levels of specificity. What comes back across all of those attempts is a more rounded combination of everything the model has ingested.
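One lightweight way to put that into practice is to generate a grid of prompt variations before you send anything to a model. A minimal sketch in Python, where the framings, intents, and brand name are illustrative placeholders rather than a canonical list:

```python
from itertools import product

# Illustrative framings and intents -- swap in your own.
FRAMINGS = [
    "What is {brand}?",
    "What are the pros and cons of {brand}?",
    "How does {brand} compare to its competitors?",
]
INTENTS = [
    "for a small business owner",
    "for an enterprise buyer",
]

def prompt_variations(brand: str) -> list[str]:
    """Cross every framing with every intent to get a broad prompt set."""
    return [
        f"{framing.format(brand=brand)} Answer {intent}."
        for framing, intent in product(FRAMINGS, INTENTS)
    ]

prompts = prompt_variations("Acme Analytics")
print(len(prompts))  # 3 framings x 2 intents = 6 prompts
```

Run each prompt against each model you care about, at more than one point in time, and compare what comes back; the overlap across runs is the picture worth acting on.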
Not all gaps are worth closing
Once you have that overview, resist the urge to fix everything at once because not all gaps carry equal weight.
Start by asking which missing or muddled content is most likely to surface in the questions your audience is asking and prioritize those. Closing a gap nobody is prompting for is a low-return move, so focus where the query volume and the content vacuum overlap.
Where to start
Begin with evaluation content. When someone asks for the "pros and cons of [brand]" and you have nothing fully addressing that, the model is likely to pull from sources you don't control. Writing balanced, transparent comparison content gives it a credible first-party source instead.
Don't skip clear definitions of what a brand does and who it serves. If you can't summarise that in a few tight sentences, the model will struggle too.
Beyond that, think about what makes a brand worthy of a link or citation.
Original research, proprietary data, and canonical explainer pages get referenced by other sources, and that citation footprint helps get a brand surfaced in AI responses.
AI models need proof that you're trustworthy (think E-E-A-T).
What they respond to is evidence, with claims that are corroborated, consistent, and trackable. The brands that AI engines confidently mention or cite are the ones that have built that credibility trail.
Say the same thing, everywhere
Consistency matters. If a brand is described differently across five pages, the model reflects that inconsistency back to users. Align your terminology, repeat your core associations deliberately, and make sure your structured data reinforces what your content says.
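Structured data is one place to lock that consistency in. A minimal Schema.org `Organization` snippet might look like this, where the brand name, description, and URLs are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Analytics",
  "description": "Acme Analytics provides self-serve marketing dashboards for small ecommerce teams.",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://x.com/example"
  ]
}
```

Whatever wording you settle on for `description`, mirror it in your meta descriptions and boilerplate copy so every surface tells the model the same story.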
What your content can't control
Some of what shapes AI sentiment doesn't appear on a brand's site at all.
We know that review aggregators, forums, social media, and third-party coverage feed these models. If the model is citing a problematic source, that is a PR and outreach task, not a writing task.
For better results, get your content strategy and your communications team working from the same brief.
Related: Defensive SEO: How to protect your brand narrative in AI search
What's your answer?
Which of these is the hardest part for you right now?
Getting cited in ChatGPT
An analysis by AirOps found that 85% of ChatGPT's discovered sources never appear in the final answer.
The TL;DR from the AirOps study:
- Retrieval doesn’t guarantee visibility. 85% of the pages ChatGPT finds never make it into the final answer, so being discoverable isn’t enough on its own.
- Fan-out expands where citations can come from. A significant share of cited pages only show up through related follow-up queries, not the original search.
- Key opportunities sit outside traditional SEO data. 95% of ChatGPT’s fan-out queries have no measurable search volume, making them invisible to standard keyword tools.
Dig into the data and get helpful advice on how to improve your chance of a citation.
Related: Prompt research: The next layer of SEO and GEO strategy
AI Mode responses changing
According to SE Ranking, Google.com is now pointing more to organic search results than Google Business Profiles (GBPs).
Back in 2025, 97.9% of Google links in AI Mode led to GBPs, with no references to traditional search results. Today that's changing: only 36.1% still reference GBPs, while 59% of Google citations now surface organic SERPs in the right-hand panel of AI Mode answers.
See more data on AI Mode responses from SE Ranking's tracking analysis.
Do you follow me on LinkedIn? I share regular tips and stories I don't have room for here. Come and join me.