Insights
How One Bad Review Caused Multiple Issues in AI Answers
In my last newsletter, I mentioned that AI sentiment isn't always shaped by what's on a brand's site.
Specifically:
"We know that review aggregators, forums, social media, and third-party coverage feed these models. If the model is citing a problematic source, that is a PR and outreach task, not a writing task."
Together, these act as credibility signals for LLMs and AI engines.
This means that when someone asks a question about a brand, much of what the AI model knows is based on what other people and websites say about it.
And, that's not always good.
Seer Interactive just published an article that demonstrates how true this is.
One bad review causes multiple issues
I'm sure you're familiar with Wil Reynolds and Nick Haigler from this 24-year-old agency.
Seer had one bad review to its name.
It was posted in 2018 by someone who clearly had an axe to grind and then copied it across multiple review sites.
The issue is that when someone asks a question about a brand, AI looks to balance the pros and cons.
If the negative signals are thin, it digs deeper until it finds something, and then it cites that thing as if it's a pattern.
The phrase "high account manager turnover" showed up in 67 branded prompt responses for Seer. They were all traced back to that one duplicated review that nobody was reading before AI started pulling from them.
Fixing negative brand sentiment isn't always a writing task
Seer's article states, "...a single blog post is not a durable fix. It's marketing 'whack-a-mole.'"
Seer's website content is fine and they have a good reputation. But, the problem lived entirely off their site, in places they weren't watching.
Their solution involved publishing hard retention data, getting fresh reviews, and building a page designed specifically to give AI something credible to cite.
Going forward, what others say about a brand may matter more than what the brand says about itself.
What's one tactic you'd suggest to a client in this position?
Hit reply and give me your best tip.
Related threads to get you thinking
A growing number of people and brands are asking questions like this:
"Have you found anything that actually works to update how AI platforms describe your brand?" reddit thread in AI_SearchOptimization
And, running experiments:
"[I] tested what happens when LLMs pull brand info from negative reddit threads vs positive ones." reddit thread in AI_SearchOptimization
You're scaling disappointment
One of the better articles I've read in a while is Pedro Dias' You’re Not Scaling Content. You’re Scaling Disappointment.
His thoughts on the "qualitative wall" and the economics of content at scale are both a reminder and a warning.
I don't have space for that entire section, but here's a taste (edited for length):
"Five hundred AI-generated articles a month. Each one needs to be reviewed for accuracy...
Each one needs to be checked for originality—because if it reads like everything else in the index, it provides no added value; no competitive advantage. Each one needs editorial oversight to ensure it actually serves the audience you claim to serve.
If you’re doing all of that, the cost just moved—and possibly increased—while you convinced yourself you were being efficient. The “efficiency” of AI content generation evaporates the moment you apply the quality standards the content actually needs to meet."
Show this to anyone who suggests AI eliminates the need for great writers and editors.
The entire article is worth your attention.
Optimize for LLM retrieval systems
I found this perspective on "optimizing for LLMs" interesting.
9thCO says it's a myth. They argue that you’re not optimizing for the model itself, but for the retrieval system that feeds it.
If that's the case, what does it take to be considered for retrieval and citation?
My summary of 9thCO's four pillars (EACA) is simplified, but you may find a couple of useful tips in the piece to share with colleagues or clients.
Eligibility
If crawlers can’t access, render, or parse your content, you’re invisible. That means no blocking bots, no messy JavaScript walls, and no hiding your best insights behind friction.
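If you want a quick way to sanity-check the bot-blocking part of this, here's a minimal Python sketch of my own (not from the 9thCO article). It asks a site's robots.txt whether a few well-known AI crawler tokens are allowed; the names are real crawler tokens at the time of writing, but verify them against each platform's documentation and swap in your own domain.

    from urllib import robotparser

    # AI crawler user-agent tokens (assumed current; check each platform's docs).
    AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

    def ai_crawler_access(site, path="/"):
        """Return {crawler: allowed?} based on the site's robots.txt rules."""
        rp = robotparser.RobotFileParser()
        rp.set_url(site.rstrip("/") + "/robots.txt")
        rp.read()  # fetch and parse robots.txt
        return {bot: rp.can_fetch(bot, site.rstrip("/") + path) for bot in AI_CRAWLERS}

    print(ai_crawler_access("https://www.example.com"))

This only covers crawler access; whether your content renders and parses cleanly is a separate check.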
Authority
Retrieval systems lean toward trusted, expert-driven content. Clear authorship, depth, specificity, and credible mentions matter more than vague thought leadership.
Compressibility
AI systems summarize and embed content. If your page is bloated, unclear, or structurally chaotic, it’s harder to extract clean meaning. Tight structure and sharp language win.
Association
Be explicit about what you do and who you serve. Strong topical focus and clear problem-solution framing help systems connect your brand to the right queries.
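To make that last pillar concrete, here's a rough illustration of my own (again, not from the article): schema.org Organization markup that states, in machine-readable form, what a brand does and who it serves. The property names come from schema.org; every value here is a placeholder.

    import json

    # Placeholder values; swap in the brand's real details.
    org = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Agency",
        "description": "Digital marketing agency helping B2B SaaS companies grow organic search.",
        "url": "https://www.example.com",
        "knowsAbout": ["search engine optimization", "AI search visibility"],
        "areaServed": "United States",
    }

    # Print the JSON-LD block you'd place in the page's <head>.
    print('<script type="application/ld+json">')
    print(json.dumps(org, indent=2))
    print('</script>')

Whether structured data directly influences retrieval is debatable, but it's a cheap way to be explicit about your positioning.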
The conclusion is that we should build content that is accessible, credible, structured, and unmistakably positioned (but you're already doing that, right?).
Check the article and tell me what stands out to you.
Do you follow me on LinkedIn? I share regular tips and stories I don't have room for here. Come and join me.