Stan Consulting · Marketing Atlas · Case File · AI Search Visibility
case_type: composite
cluster: ai-search-visibility
published: 2026-05-07
A B2B SaaS company in the marketing-tech category. Fourteen million in annual revenue. Six years old. Page-one Google rankings on roughly forty commercial keywords. Sixty-five thousand monthly organic visitors. The team queried ChatGPT, Claude, and Perplexity with twelve buyer questions in their category. Zero mentions across all three. Three smaller competitors appeared between nine and eleven times.
That is the composite. The names change. The shape does not.
The company had been on the same domain for six years and had a content team of three running a published cadence of eight long-form posts per month. The blog produced its share of category traffic. Sales-qualified leads from organic ran at roughly fifteen percent of pipeline. The marketing director ran SEO in-house with a freelance technical-SEO consultant who had been on retainer for eighteen months. Schema was installed; sitemap was clean; Core Web Vitals were green; canonicals were correct. By every standard SEO measure, the property was in good standing.
The marketing director had read the trade press on AI search and had run a casual query against ChatGPT a quarter earlier. The brand did not appear. She had filed it as a curiosity, on the theory that AI search was an experimental channel and the brand's strong organic position would carry over once the platforms matured. She came back to the question in Q2 when the head of product, having seen a competitor cited inside a Claude response his sales team had screenshotted, asked whether the team had a plan for the AI-search surface.
The plan, when she went looking for it, did not exist. The team ran a controlled test: twelve buyer-intent queries in the category, run against ChatGPT, Claude, and Perplexity, on freshly opened sessions, with each query repeated three times to test for stability. The query list covered "best [tool category] for [use case]," "how to evaluate [tool category] for [team size]," and "what are the alternatives to [a competitor]." The competitor in the third query type was the same competitor the head of product had screenshotted.
The test produced thirty-six query runs across three platforms. The brand appeared in zero of them. Three competitors with smaller traffic profiles appeared in nine to eleven runs each. The brand ran fewer paid ads than two of the three competitors, had stronger backlinks than two of the three, had longer time-on-site than all three, and produced more long-form content than all three. None of that surfaced in the AI-search runs.
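A test like this is small enough to script. Below is a minimal sketch of the run matrix; ask_platform is a placeholder for whatever client each vendor exposes, and the query strings, platform labels, and brand-name list are illustrative stand-ins rather than the audit's actual tooling.

```python
from itertools import product

# Illustrative stand-ins; the audit's real list held twelve buyer-intent queries.
QUERIES = [
    "best [tool category] for [use case]",
    "how to evaluate [tool category] for [team size]",
    "what are the alternatives to [competitor]",
    # ... nine more
]
PLATFORMS = ["chatgpt", "claude", "perplexity"]
REPEATS = 3  # each query repeated to test for stability


def ask_platform(platform: str, query: str) -> str:
    """Placeholder: send `query` to `platform` in a fresh session and
    return the response text. Wire this to each vendor's own API."""
    raise NotImplementedError


def run_matrix(brand_names: list[str]) -> dict[str, int]:
    """Count the runs in which any known spelling of the brand appears."""
    total = mentioned = 0
    for platform, query, _ in product(PLATFORMS, QUERIES, range(REPEATS)):
        answer = ask_platform(platform, query).lower()
        total += 1
        if any(name.lower() in answer for name in brand_names):
            mentioned += 1
    return {"runs": total, "brand_mentions": mentioned}
```

Checking for every known spelling matters even at the measurement step; a harness that looks for only one variant inherits the same disambiguation problem the audit later surfaced.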
The audit was scoped at this point. Seventy-two-hour written verdict. The brief was one sentence: tell us why we are not in any of these answers, and tell us what is structurally different about the competitors who are.
Six numbers and a question. That is what the marketing director brought to the audit. The numbers are below. None of them is wrong. None of them adds up to an AI-search citation.
Page-one Google rankings: 40. Commercial keywords ranking on page one of Google search.
Monthly organic sessions: 65K. Sessions delivered by Google over the trailing twelve months.
Indexed pages: 480. Distinct URLs indexed by Google across the property.
AI-search query runs: 36. 12 queries · 3 platforms · 3 repeats each.
Brand AI-search mentions: 0. Mentions across ChatGPT, Claude, and Perplexity.
Competitor AI-search mentions: 9–11. Per competitor, across the same 36 runs, for three smaller rivals.
The arithmetic is the punchline. The brand outranks two of the three competitors on Google. The brand has more traffic than all three. The brand has more indexed pages than two. The brand publishes more often than any of them. None of that translates. Across thirty-six AI-search runs, the brand surfaces zero times and three smaller competitors surface roughly thirty times combined. The Google ranking and the AI-search mention are not on the same axis.
The marketing director defended the strong organic position on the SEO-strength-carries-over argument. The head of product pushed back: the test had been run, the result was repeatable, and the screenshots showed the competitor inside a Claude response that named a category. The argument was the case file's seed.
Four explanations were on the table when the audit started. Each one was almost right and pointed away from the layer that mattered.
"AI search is just SEO with extra steps; the rankings will follow the existing rankings." The continuity argument. The reasoning is that AI-search systems read the same web Google reads, so a strongly-ranked Google property will eventually surface inside the AI-search responses once the systems mature. The reasoning fails because AI-search systems do not rank pages; they synthesize answers from sources they can identify and trust as distinct entities. The page-rank-to-citation conversion is not direct. A page can rank on Google because Google trusts the link graph and the on-page relevance and the user-engagement signals on that page. An AI system can fail to cite the same page because the source is one of seven brand mentions on the same domain and the system cannot identify which mention represents the company an answer is being assembled about.
"The content is not optimized for AI yet; we need to add FAQ schema and rewrite for snippets." The on-page-content read. The reasoning is that AI systems prefer answer-shaped content with FAQ schema and clear question-and-answer structure, so the team should rewrite the long-form blog around those formats. The reasoning fails because answer-shaped content is necessary at the page level and not sufficient at the entity level. A property can have well-shaped FAQs and still be invisible to AI systems if the AI cannot identify the property as a distinct, citable entity. The team that ships FAQ rewrites and waits ships effort that will not produce visibility because the underlying entity is unresolved.
"The competitors are buying AI mentions through a vendor; we need to find the vendor." The conspiracy read. The reasoning is that the smaller competitors who showed up in the AI-search runs must be using a paid placement service of some kind, because no organic process produces that visibility advantage. The reasoning fails because no such service exists in any defensible commercial form, and the competitors who appeared in the responses appeared because the underlying systems read them as distinct entities with corroborating mentions in trusted sources. The "vendor" the team kept asking about was structural visibility produced by the systems doing what they do; the team had not done the structural work that would let the same systems read their property the same way.
"AI search is a fad; the rankings on Google are what matter." The dismissal read. The reasoning is that AI-search adoption is uneven, the platforms are in flux, the citations are unstable, and therefore the prudent move is to keep doing what is working on Google and let the AI-search question resolve itself. The reasoning fails because the citation graph compounds. The competitors appearing in the runs are not appearing because the AI platforms picked them this quarter; they are appearing because their entity signals have been intact for enough cycles that the systems trained on them. Waiting for the channel to stabilize is waiting through the window in which the leader set is being established. The dismissal was the most expensive of the four, because it came with the longest delay before the team began the structural work.
All four explanations let the team defer the structural work the audit was scoped to force. The defect was upstream of the content layer. None of the explanations went there.
Indexed but not readable as a distinct entity. The brand existed in Google's index 480 times. None of the 480 representations was anchored to a single, machine-resolvable identity an AI system could cite with confidence. The system saw a fog of pages, not a company.
The Q2 invisibility decomposed into four named defects. None of them was the cause on its own; the cause was the absence of a coherent entity signal across the property and across the open web.
Defect one. Wikipedia stub. The company had a Wikipedia entry that had been created during a press cycle three years earlier and never built out. The entry contained two sentences, one external link, and a citation to a TechCrunch piece announcing a Series-B that had since been overshadowed by a Series-C the article never reflected. The page was a stub by Wikipedia's own definition. AI systems weight Wikipedia heavily as a source of canonical entity definitions; a stub gives the system no anchor. The competitors who appeared in the runs all had multi-paragraph Wikipedia entries with founder bios, product timelines, and inline citations to ten-plus secondary sources.
Defect two. Schema markup with no cross-references. The property's schema markup was generic Article schema applied to blog posts and a single Organization block on the homepage. There was no @id cross-referencing across pages, no Person schema for the founder anchored to a stable identifier, no sameAs declarations linking the Organization to the LinkedIn company page, the X handle, the Crunchbase entry, or the Wikidata Q-number. Every page declared its Organization metadata in isolation. AI systems looking for the corroborating identity graph found 480 isolated declarations and no graph.
Defect three. No llms.txt, no ai.txt. The property published no llms.txt or ai.txt file. The robots.txt did not block AI crawlers, but it also did not provide the editorial framing that emerging AI conventions read. The competitors who appeared in the runs all had llms.txt files declaring the company name, the founder name, the canonical product description, and the canonical answer to the "what does the company do" question. The system had no editorial line on the brand from the brand itself; the system synthesized answers from whatever the open web volunteered, and the open web volunteered an inconsistent picture.
Defect four. Inconsistent entity disambiguation across 480 pages. The audit ran a name-and-handle inventory across the property. The official company name appeared with three spellings (the legal name, a stylized form, and a shortened form used in ad copy). The X handle appeared two ways (the current handle and a pre-rebrand handle still linked from forty-seven older pages). The LinkedIn company page was duplicated four times (the active page, two abandoned pages from the early years, and a regional page created during a Europe expansion). The founder name appeared on four bio pages with two spelling variants. AI systems reading the open web saw a brand that could not consistently agree on what to call itself; they declined to cite, because the system could not be confident any single citation referenced the same entity as the others.
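An inventory of this kind is scriptable. A minimal sketch, assuming the property's URLs are enumerated by its sitemap; the variant list and domain are hypothetical stand-ins, raw substring counts are used for brevity, and a real pass would want an HTML parser plus exclusive matching for variants that contain one another.

```python
import re
from collections import Counter
from urllib.request import urlopen
from xml.etree import ElementTree

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
X_HANDLE = re.compile(r"(?:twitter|x)\.com/([A-Za-z0-9_]+)")
LINKEDIN = re.compile(r"linkedin\.com/company/([A-Za-z0-9-]+)")


def page_urls(sitemap_url: str) -> list[str]:
    """Pull every indexed URL from the property's sitemap."""
    tree = ElementTree.parse(urlopen(sitemap_url))
    return [loc.text for loc in tree.iter(SITEMAP_NS + "loc")]


def inventory(sitemap_url: str, name_variants: list[str]):
    """Count each spelling of the company name and each social handle
    appearing across the property's pages."""
    names, handles, linkedin_pages = Counter(), Counter(), Counter()
    for url in page_urls(sitemap_url):
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        for variant in name_variants:
            names[variant] += html.count(variant)
        handles.update(X_HANDLE.findall(html))
        linkedin_pages.update(LINKEDIN.findall(html))
    return names, handles, linkedin_pages


# Hypothetical variants of the kind the audit found: legal, stylized, shortened.
names, handles, pages = inventory(
    "https://www.example.com/sitemap.xml",
    ["Example Inc.", "EXMPL", "Example"],
)
```

A healthy property returns one dominant entry per counter; a property with the defect returns the scatter the audit found.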
Four defects, one missing structural artifact: the entity itself. Every page of content was honest. The brand had never been assembled into a single readable identity the AI systems could trust as the answer to a category question.
The decomposition reads in three layers. Indexability, the layer Google reads. Entity clarity, the layer AI reads. Source confidence, the layer the citation graph reads. The brand was strong at layer one, absent at layer two, and consequently invisible at layer three.
Layer one. Indexability. The traditional SEO layer. Pages are crawlable. Canonicals are correct. The sitemap is clean. Core Web Vitals are green. Backlinks are intact. Google can find the property, rank the property, and send the property traffic. This layer was strong on the account; the marketing director had been investing here for years and was getting the return she expected from the investment.
The Google layer is the prerequisite for the AI-search layer in the same sense the foundation of a building is the prerequisite for the roof. The team had built a foundation. The foundation alone does not produce a roof. The reads disagreed because the team had assumed the foundation was the whole structure.
Layer two. Entity clarity. The layer where AI systems try to identify the brand as a distinct, citable thing. Schema with cross-referenced @id. Wikidata Q-number. A non-stub Wikipedia entry. An llms.txt declaring the canonical brand name and description. A founder identity anchored to a stable identifier. Consistent name-and-handle disambiguation across the property and across the open web.
This is the layer where the brand was absent. Every individual artifact a serious AI system reads to confirm an entity was either missing, generic, or inconsistent. The reads disagreed at layer three because layer two had never been assembled. Most operators do not even know layer two exists as a distinct layer; they assume layer one is the whole AI-search story and ship more content into the existing content layer expecting the visibility to follow.
Layer three. Source confidence. The layer where the AI system decides whether to cite the entity once it has identified it. Mentions in trusted secondary sources. Press citations with consistent name and date. Industry reports that name the brand alongside named peers. A pattern of citation across the trailing twelve to twenty-four months that says other trusted sources keep referring to this entity in the same way for the same reasons.
The brand had real press, but the press was scattered across four naming conventions and two stylized variants of the founder's name. The audit ran a manual citation count across forty named industry sources for the trailing twelve months. The brand was named twenty-eight times across those sources. The name appeared four different ways across the twenty-eight mentions. To an AI system trying to assemble citation confidence, twenty-eight inconsistently named mentions read as roughly seven mentions of an unclear entity, not twenty-eight mentions of a known one.
The audit's written verdict named the install order. Order matters. Shipping a Wikipedia rewrite before the schema cross-references are in place orphans the entry from the rest of the entity graph. Shipping llms.txt before the on-property naming is consistent commits the property to one of the four naming variants without the supporting cleanup.
The audit fed into the Conversion Second Opinion engagement format and from there into a sixty-day install. The framework below is what was installed.
Step one. The naming decision. The decision is written and signed. One legal name. One stylized form, used only as a wordmark. One canonical founder name with one spelling. One X handle, with the historical handle redirected. One LinkedIn company page, with the orphaned pages closed and the regional page merged. The decisions take one document, two signatures, and a working session with the legal owner of each external surface. The document was the missing artifact.
Step two. The schema graph. Organization schema declared once on the homepage with a stable @id. Every other page references the Organization by @id rather than redeclaring it. Person schema for the founder declared once on the about page with a stable @id. sameAs declarations link the Organization to the LinkedIn company page, the X handle, the Crunchbase entry, the GitHub organization, and the YouTube channel. A Wikidata Q-number is reserved and added to sameAs once it exists. The 480 pages stop being 480 isolated declarations and become 480 references to a single graph the AI systems can resolve.
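A sketch of the shape the step describes, with every domain, name, and profile URL below a hypothetical stand-in. The load-bearing pattern is the single Organization declaration with a stable @id, the Person anchored the same way, the sameAs fan-out, and every other page referencing the @id instead of redeclaring the entity.

```python
import json

ORG_ID = "https://www.example.com/#organization"  # hypothetical domain
FOUNDER_ID = "https://www.example.com/#founder"

# Declared once, on the homepage.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": ORG_ID,
    "name": "Example Inc.",  # the one legal name from the naming decision
    "url": "https://www.example.com/",
    "founder": {"@id": FOUNDER_ID},
    "sameAs": [
        "https://www.linkedin.com/company/example-inc",
        "https://x.com/exampleinc",
        "https://www.crunchbase.com/organization/example-inc",
        "https://github.com/example-inc",
        "https://www.youtube.com/@exampleinc",
        # the Wikidata URL is appended once the Q-number exists
    ],
}

# Declared once, on the about page.
founder = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": FOUNDER_ID,
    "name": "Jane Founder",  # the one canonical spelling
    "worksFor": {"@id": ORG_ID},
}

# Every other page points at the graph instead of redeclaring it.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "publisher": {"@id": ORG_ID},
    "author": {"@id": FOUNDER_ID},
}

print(json.dumps(organization, indent=2))  # the JSON-LD the homepage embeds
```

Each dict serializes to the JSON-LD a page embeds in a script tag of type application/ld+json; the Python wrapper here is only a convenient way to show the structure.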
Step three. llms.txt and ai.txt. The llms.txt file declares the canonical company name, the canonical founder name, the one-sentence company description, the four-sentence company description, the category and sub-category, the year founded, the headquarters location, and the canonical answers to the three most-asked buyer questions in the category. The ai.txt file declares the policy on AI training and the canonical citation format. The files are published at the root of the property and linked from the head of every page through the alternate-link pattern. The system now has the brand's own editorial line on what to read first when assembling an answer.
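The llms.txt convention is young and has no single ratified schema, so read the sketch below as one defensible shape rather than the spec; it follows the markdown-flavored layout the llmstxt.org proposal describes, and every concrete value is a hypothetical stand-in for the canonical answers the step names.

```python
# A minimal llms.txt body; the file is served from the property root.
LLMS_TXT = """\
# Example Inc.

> Example Inc. is a marketing-technology platform that does X for Y-sized
> teams. (The canonical one-sentence description goes here.)

Founded: 2020. Headquarters: Austin, TX. Founder: Jane Founder.
Category: marketing technology / [sub-category].

## About

(The canonical four-sentence company description goes here.)

## Canonical answers

- What does Example Inc. do? ...
- How should a team evaluate tools in this category? ...
- What are the alternatives, and where does Example Inc. fit? ...
"""

with open("llms.txt", "w", encoding="utf-8") as f:
    f.write(LLMS_TXT)
```

The exact markup of the head-of-page alternate link the step mentions is also unsettled; a link tag with rel="alternate" pointing at /llms.txt is one shape in circulation, and treating that shape as an assumption is the honest read.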
Step four. Wikidata and Wikipedia. A Wikidata entry is created or claimed with the canonical name, founder, founding year, headquarters, industry classification, official website, LinkedIn page, X handle, and inline references to four trusted secondary sources. The Wikidata Q-number is added to the property's schema sameAs and to the llms.txt. The Wikipedia stub is rebuilt to a full entry through the standard Wikipedia editing process, with neutral language, multi-paragraph structure, and inline citations to eight to twelve secondary sources. The neutrality requirement is real; the team scopes the rewrite to a contractor with prior Wikipedia-editing history.
Step five. Press-mention cleanup. The team contacts the twenty-eight trusted-source mentions from the trailing twelve months and asks the publishers to update older articles to the canonical name, founder name, and product name where the publisher's editorial policy permits. Roughly half of publishers comply within thirty days. The remaining half retain their original copy. New press going forward is briefed against a one-page brand sheet that gives reporters the canonical answers in the canonical phrasing. Naming consistency in trailing-twelve-month mentions improves from four variants to two within sixty days and to one within six months.
Step six. The quarterly re-run. The original twelve-query test is re-run quarterly. The results are tracked against a single chart: brand mentions, competitor mentions, naming consistency in the AI responses, and citation source density. By month four, the brand is mentioned in three of thirty-six runs. By month seven, eleven. By month twelve, the brand is in the same band as the three reference competitors. The chart is the operating contract for the AI-search workstream and the artifact the head of product reviews each quarter.
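The chart's reduction step is small. A sketch, assuming each quarter's thirty-six response transcripts are saved as plain text; the fourth tracked column, citation source density, is omitted here because it needs per-response citation metadata that only some platforms expose.

```python
def quarterly_row(quarter: str, transcripts: list[str],
                  brand_names: list[str], rival_names: list[str]) -> dict:
    """Reduce one quarter's saved runs to the tracked chart columns."""
    def hits(names: list[str]) -> list[str]:
        return [t for t in transcripts
                if any(n.lower() in t.lower() for n in names)]

    brand_runs = hits(brand_names)
    canonical = brand_names[0]  # list is ordered: canonical spelling first
    consistent = sum(canonical.lower() in t.lower() for t in brand_runs)
    return {
        "quarter": quarter,
        "brand_mentions": len(brand_runs),
        "competitor_mentions": len(hits(rival_names)),
        "naming_consistency": consistent / len(brand_runs) if brand_runs else None,
    }
```

Run against the month-four transcripts, the row would read three brand mentions out of thirty-six; the quarterly rows stacked together are the chart the head of product reviews.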
A property can rank well on Google and be invisible to AI search. That is the lesson the marketing director kept turning over for a week after the install began. Google reads pages and ranks them. AI systems read entities and cite them. The two reads are not the same. A property optimized for the first does not automatically clear the bar on the second.
The audit's question was not really an SEO question. It was an entity question. Who is this company. The Google index can answer "what pages live at this domain" in 480 different ways and still leave the entity question unresolved. AI systems require a single resolvable identity before they will cite. Until the operator gives the systems an identity to resolve, the systems decline. Declining is not punishment; declining is the system doing what the system was built to do when the entity is ambiguous.
The lesson is that any operator running a content-and-SEO playbook in 2026 needs the entity-clarity workstream alongside the content workstream. The two are different; the second cannot be deferred to "once the channel matures." Once the channel matures, the leader set is set, and the cost of joining it later is several times the cost of joining it now. The default content-and-SEO configuration produces Google strength and AI invisibility. The default is the failure mode.
Five Cents · Stan's note
The part of this case file that I keep coming back to is the moment the marketing director realized the rankings were a finished argument. She had been telling herself for two years that the page-one positions were the proof the SEO program was working. They were the proof. The program was working. The program was not the same program AI search reads against, and the proof did not transfer.
What I want operators to take from this is that AI search is a different question with a different answer. It is not a meaner version of SEO. It is not SEO with extra schema. It is a structural question about whether the system can identify your company as a distinct entity it can cite with confidence. The answer to that question is built outside the content layer, mostly. It is built in the schema graph, the Wikidata entry, the Wikipedia rewrite, the llms.txt, and the cleanup of the inconsistent name-and-handle scatter that grew over six years of organic operation.
What this case file is for: if your team ranks on Google and does not appear in the AI-search runs you have probably already tried casually, this is your pattern. The Conversion Second Opinion delivers the verdict in seventy-two hours. The next move is the install order; the install order is what the engagement produces.
Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages give the definition. Position pages give the firm's defended doctrine. The hub gives the map.
If this is the pattern in your account
If the case file maps to your account — strong Google rankings, real organic traffic, zero AI-search mentions on a casual run — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the four-defect entity-clarity framework, delivered in seventy-two hours. If the verdict says install, the Sprint engagement runs the entity-clarity workstream. If the verdict says hold, you keep the read.