AI Citation
The act of an AI search engine including a brand or source as evidence in its answer. The new equivalent of a backlink, with different mechanics and a different audience.
Section 02 · Quick definition
An AI Citation is a moment when an AI search engine names a brand, page, or source as evidence inside the answer it shows the user. The citation can be visible (an anchor citation, where the source name appears next to the claim) or supporting (the source was consulted but not named to the user). The mechanics differ from backlinks: an AI citation is a runtime decision made by a model at inference, not a permanent link recorded by a third party. The same brand can be cited heavily by Perplexity and ignored by Google AI Overviews in the same week.
Section 03 · Why it matters
The AI citation is the unit of distribution in AI search. A buyer asks Perplexity for the best diagnostic for paid-media waste, the answer cites three providers, and the buyer clicks one of the three. The two providers not cited do not appear at all in that buyer's consideration set for that question. The citation is binary: cited or skipped. There is no second page to scroll to.
The metric matters because it routes consideration before any click is recorded. A brand cited as the anchor on a category question gets a click and a brand association in the same moment. A brand cited as supporting evidence under a competitor's anchor gets neither. A brand not cited at all is being filtered out before the buyer ever lands on a page where attribution can fire.
The practical stake is that operators measuring AI search by traffic alone are measuring downstream of the actual decision. The decision is whether to cite. The traffic is the residue of that decision.
Section 04 · How it works
An AI citation is produced when a retrieval-augmented model selects a source from its candidate pool and names that source in the answer it generates. The candidate pool is built at inference: the model issues retrieval queries, gathers candidate documents from the open web and from indexed sources, and scores each candidate for relevance, source confidence, and how well the document answers the specific question.
The model reads the user's question and decides what entities, claims, and facts the answer needs to support. A question about category leadership needs different sources than a question about pricing or implementation. The interpretation step decides which source profile the model is looking for.
The retrieval layer pulls candidate documents from the open web through search APIs (Bing, Brave, Google) and from any indexed corpus the model has access to. ChatGPT, Claude, Perplexity, and Gemini each use slightly different retrieval stacks, which is why citation share differs across surfaces.
The model scores each candidate for confidence. Confidence rises with brand mentions in reputable third-party sources, internal consistency across the candidate's own pages, and entity clarity signals. Confidence falls with thin content, contradictory facts, and a domain the model has no prior signal on.
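None of the engines publish this scoring function. The sketch below is an illustrative heuristic built only from the signals just named; the Candidate shape, field names, weights, and thresholds are all assumptions, not any vendor's implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical signal summaries a scoring layer might hold per document.
    url: str
    third_party_mentions: int   # brand mentions in reputable outside sources
    consistent: bool            # facts agree across the candidate's own pages
    entity_clarity: float       # 0..1, how cleanly the page resolves to one entity
    word_count: int             # proxy for thin content
    domain_prior: float         # 0..1, the model's prior signal on the domain

def confidence(c: Candidate) -> float:
    """Toy version of the scoring step: which signals raise or lower confidence."""
    score = c.domain_prior
    score += 0.1 * min(c.third_party_mentions, 5)  # reputation helps, with diminishing returns
    score += 0.3 * c.entity_clarity
    if not c.consistent:
        score -= 0.5                               # contradictory facts are heavily penalized
    if c.word_count < 300:
        score -= 0.3                               # thin content gets dropped during scoring
    return max(0.0, score)
```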
The model writes the answer and decides which sources to name. Anchor citations name the source next to the claim and tend to drive direct clicks. Supporting citations are consulted but not named, and they shape which brands appear in subsequent answers about adjacent questions. Both compound.
The four steps run per query. A brand cited as the anchor on one question is more likely to be retrieved as a candidate on adjacent questions, because the model's confidence signal updates.
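Condensed, the four steps behave like the loop below. Everything here is a schematic stand-in: interpret and retrieve are stubs, and no surface exposes its pipeline this way. The point is the shape of the flow, in particular the prior update at the end, which is how one citation makes the next more likely.

```python
def interpret(question: str) -> str:
    # Step 1 (stub): decide what source profile the answer needs.
    return "comparison" if "best" in question.lower() else "reference"

def retrieve(profile: str) -> list[str]:
    # Step 2 (stub): candidate URLs from search APIs and indexed corpora.
    return [f"https://example-{i}.com/{profile}" for i in range(8)]

def answer_query(question: str, priors: dict[str, float]) -> list[str]:
    candidates = retrieve(interpret(question))

    # Step 3: score candidates; here the stored prior stands in for the
    # fuller confidence scoring sketched earlier.
    ranked = sorted(candidates, key=lambda url: priors.get(url, 0.0), reverse=True)

    # Step 4: name the top sources as anchor citations.
    anchors = ranked[:3]

    # Feedback: an anchor citation today raises the prior, so the same source
    # is more likely to be retrieved on adjacent questions tomorrow.
    for url in anchors:
        priors[url] = priors.get(url, 0.0) + 0.05
    return anchors
```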
Section 05 · Common misunderstandings
“If we're indexed, we'll be cited.”
Indexing makes a brand findable. Citation requires the model to choose the brand over alternatives. A brand in the index but with no entity clarity, no third-party reputation signals, and no clear answer to the question being asked will be retrieved and dropped during scoring. Indexing is the floor. Citation is the work.
“Citations are like backlinks. Build more, get more.”
Citations are a runtime decision, not a permanent edge. A brand can be cited heavily this month and ignored next month if a competitor publishes a stronger answer to the same question. Backlink-style thinking treats citations as a stockpile. They are a flow, and the flow is reset on every query.
“ChatGPT cites us, so we're visible in AI search.”
Each AI surface uses different retrieval and different source-confidence priors. A brand cited by ChatGPT can be invisible on Perplexity and missing from Google AI Overviews on the same query. Citation share has to be measured per surface, not per company. Operators reading one surface are reading one quarter of the picture.
“Supporting citations don't matter because users don't see them.”
Supporting citations train the next answer. A brand consulted as a supporting source today is more likely to appear as the anchor on an adjacent question tomorrow because the model's confidence signal compounds. Treating supporting citations as worthless ignores how the retrieval graph updates.
“If the answer doesn't cite anyone, citation share doesn't exist.”
Many answers consult sources without naming them. An answer with no visible citations was still informed by retrieval, and the brands feeding it still shape the answer the user reads and the association the user forms, just without the click. Citation share exists whether the surface chooses to display it or not.
Section 06 · Diagnostic questions
For the top 25 category queries the brand should be cited on, what is the citation share on ChatGPT, Claude, Perplexity, and Google AI Overviews this week?
Where the brand is cited, is it the anchor citation visible in the answer, or a supporting citation consulted but not named?
Which competitor is the default anchor citation in this category, and what makes their pages easier for the model to cite confidently?
What share of citations come from the brand's own domain versus third-party reviews, comparisons, and listicles, and how is that mix moving over the last two quarters?
Is the page the model would cite the right page, or is the model citing a thin landing page when a deeper resource exists?
Where citations are missing, is the gap an entity clarity problem, a content depth problem, or a third-party reputation problem?
What share of overall AI-referred traffic arrives untagged, and how is that traffic showing up in GA4 (direct, organic, referral, or attributed to a different source)?
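The first two questions reduce to a simple aggregation once answers are logged. A minimal sketch, assuming the operator records one row per query-surface run, by hand or via a monitoring tool; the field names and sample rows are illustrative, not from any product.

```python
from collections import defaultdict

# One row per category query run on one surface: was the brand cited,
# and was the citation the visible anchor? Sample rows are illustrative.
observations = [
    {"surface": "Perplexity", "cited": True, "anchor": True},
    {"surface": "ChatGPT", "cited": True, "anchor": False},
    {"surface": "Google AI Overviews", "cited": False, "anchor": False},
]

def citation_share(rows: list[dict]) -> dict[str, dict[str, float]]:
    """Per-surface share of queries cited at all, and cited as the anchor."""
    totals = defaultdict(lambda: {"asked": 0, "cited": 0, "anchor": 0})
    for r in rows:
        t = totals[r["surface"]]
        t["asked"] += 1
        t["cited"] += int(r["cited"])
        t["anchor"] += int(r["anchor"])
    return {
        surface: {
            "cited_share": t["cited"] / t["asked"],
            "anchor_share": t["anchor"] / t["asked"],
        }
        for surface, t in totals.items()
    }

print(citation_share(observations))
```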
Section 07 · Related Atlas entries
AI Search Optimization · The set of practices that determine whether AI search engines cite a brand when answering a category query. The work behind the citation.
Section 08 · Five Cents
There is a difference between being indexed and being cited, and the difference is entity clarity at the page level. I have read AI answers that retrieved a brand and dropped it, then named a smaller competitor with cleaner schema and a published author byline. The retrieval layer found both. The scoring layer trusted one. The cost of being knowable but not citable is paid every time a buyer asks the category question and gets the other name. The fix is not more content. The fix is making the content that already exists resolve to a single confident entity the model can name without hedging.