Insights
What the data says: Google vs. ChatGPT
“Do users click away because they distrust AI answers?”
This was one of the questions asked in a report on click-through traffic prepared by Momentic Marketing. (Note: Data in the report came from Similarweb).
Momentic's answer:
Possibly. Google hides links behind AI Overviews; ChatGPT shows them inline. When an LLM quotes a source, skeptical users click to verify. This faith-check behavior amplifies ChatGPT’s per-person referral rate.
This response captures part of the dynamics at play, but it misses key context needed to understand what's really going on.
My breakdown of their answer
Claim: "Users click away because they distrust AI answers"
Partially true.
Yes, skepticism drives some users to verify what an AI says, especially when it references or summarizes third-party content.
But this behavior is also about curiosity, the need for deeper context, or wanting the full source (e.g., the entire blog post or study).
Claim: "Google hides links behind AI Overviews"
Accurate.
AI Overviews summarize information without surfacing source links prominently.
You often have to expand a dropdown or scroll to reach source links. This reduces organic click-through rates (CTR), especially for publishers and content creators.
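To make that CTR impact concrete, here's a minimal sketch of how the drop might be estimated. The impression and click figures are hypothetical placeholders, not numbers from the Momentic/Similarweb report:

```python
# Hypothetical illustration: how an AI Overview might depress organic CTR.
# The impression/click figures below are made up for the example.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate = clicks / impressions."""
    return clicks / impressions if impressions else 0.0

# Before AI Overviews: result link shown prominently on the SERP.
ctr_before = ctr(clicks=5_200, impressions=100_000)   # 5.2%

# After AI Overviews: link tucked behind an expandable panel.
ctr_after = ctr(clicks=3_100, impressions=100_000)    # 3.1%

relative_drop = (ctr_before - ctr_after) / ctr_before
print(f"CTR before: {ctr_before:.1%}, after: {ctr_after:.1%}, "
      f"relative drop: {relative_drop:.0%}")
```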
Claim: "ChatGPT shows links inline"
Mostly true (with caveats).
When ChatGPT has web browsing enabled or shows link cards (as with the Bing/ChatGPT integrations), sources are often embedded or linked directly.
But not all responses include links, and ChatGPT doesn’t always cite unless asked or configured to do so.
Claim: "This faith-check behavior amplifies ChatGPT’s per-person referral rate"
Speculative.
I can't find public data comparing referral rate per user session between ChatGPT and Google.
It's plausible that when ChatGPT does include links, users are more likely to click them because they’re more visible and contextual.
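For clarity, "per-person referral rate" here means outbound clicks to publishers divided by user sessions. A rough sketch of how you could compare it across platforms, using entirely made-up numbers (no public dataset I know of breaks this down):

```python
# Hypothetical comparison of per-session referral rates.
# All session/click counts are invented for illustration only.

from dataclasses import dataclass

@dataclass
class PlatformTraffic:
    name: str
    sessions: int          # user sessions on the platform
    outbound_clicks: int   # clicks through to external publisher sites

    @property
    def referrals_per_session(self) -> float:
        return self.outbound_clicks / self.sessions

platforms = [
    PlatformTraffic("Google (with AI Overviews)", sessions=1_000_000, outbound_clicks=350_000),
    PlatformTraffic("ChatGPT (with inline links)", sessions=1_000_000, outbound_clicks=420_000),
]

for p in platforms:
    print(f"{p.name}: {p.referrals_per_session:.2f} referrals per session")
```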
What I'd add
Context matters:
Not all queries generate the same behavior.
For “how-to” or “quick fact” searches, users often don’t want to click at all. But for health, legal, or B2B decisions, users tend to dig deeper, and that’s when click behavior matters most.
SEO implication:
If Google continues to prioritize AI Overviews without visible links, publishers may lose out on traffic (it's happening).
The people whose content helped power the answers are now struggling to stay visible in Search.
In contrast, ChatGPT’s inline citation format may preserve or even increase referrals when it includes sources.
The new battlefront:
The real competition is shifting toward how well those AI tools share credit with the original sources.
When I say “share credit,” I mean:
Which platform gives more visibility (and traffic) back to content creators and publishers?
Which AI tools are structured in a way that supports the broader web ecosystem and not just themselves?
That’s going to define who wins long-term trust from users and creators.
What do you think?
How AI engines prioritize sources
Three days ago, Search Engine Land published an article exploring citation data across the leading AI engines.
It provides an excellent breakdown of each AI engine's preferences and how it prioritizes which sources to cite in its answers.
The author discussed the link between web presence, search visibility, and AI citations and noted these key takeaways for SEO:
- Double down on foundational SEO.
- Improve your organic footprint through high-quality, deep content and authoritative mentions across the web (third-party blogs, news, trusted forums, relevant directories).
- Think of AI optimization as an outcome of excellent SEO, not a separate discipline. AI citations mirror your overall web authority.
You might want to bookmark the findings if you aim to be featured as a source in a particular AI engine.
AI content tools decline in popularity
In my last newsletter, I predicted a growing backlash against AI-generated content and a swing towards human-led publishing.
The evidence is beginning to support my theory.
Take a look at the recent usage metrics across AI writing tools:
The data shows a consistent downward trend, with an overall 12% decline across the category.
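For transparency on what a figure like that represents, here's a rough sketch of how a category-level decline could be computed from per-tool usage numbers. The tool names and visit counts are placeholders, not the actual Similarweb data:

```python
# Hypothetical aggregation: category-wide decline in AI writing tool usage.
# Tool names and visit counts are placeholders, not real Similarweb figures.

usage = {
    # tool: (visits_previous_period, visits_current_period)
    "writing_tool_a": (12_000_000, 10_400_000),
    "writing_tool_b": (8_500_000, 7_600_000),
    "writing_tool_c": (4_200_000, 3_700_000),
}

prev_total = sum(prev for prev, _ in usage.values())
curr_total = sum(curr for _, curr in usage.values())

category_decline = (prev_total - curr_total) / prev_total
print(f"Category-wide decline: {category_decline:.0%}")  # ~12% with these placeholder numbers
```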
The initial efficiency gains that drove adoption appear to be giving way to more nuanced considerations about content effectiveness and audience response.
Obviously, LLMs like ChatGPT and Claude aren't included here, but I believe they face similar challenges.
LLM challenges
- As AI-generated content becomes widespread, it creates a two-sided problem: when everyone uses similar tools, the resulting content lacks distinctiveness, which diminishes its value.
- Major search engines continue refining their ability to identify and prioritize content demonstrating genuine expertise, authority, and trustworthiness (Yes, E-E-A-T). This development favors human-led content with real-life perspectives.
- Consumer ability to detect AI-generated content is improving rapidly, leading to reduced engagement when readers suspect a machine wrote the content.
- With 81% of consumers reporting that brand trust drives purchasing decisions, the perceived authenticity of communications becomes a critical business consideration beyond mere production efficiency.
These trends don't signal the end of AI assistance in content production but rather a pivot in how these tools are used.
Smart marketers use hybrid approaches
Marketers are leveraging AI for research and drafts, while ensuring human expertise guides strategy, provides unique insights, and maintains brand voice integrity.
If we recognize this shift early, we stand to gain a competitive advantage by developing content systems that thoughtfully integrate AI capabilities with human expertise.
We don't need to view them as substitutes for one another.
As long as we aim to create content worth consuming, we'll all win.
Similarweb: Global Sector Trends on Generative AI [PDF]
I hope these insights help. For more tips, follow me here.