Stan Consulting LLC · Marketing Atlas · Find-Hot-Topics Methodology

Marketing Atlas · Reference · Content Strategy

Find-Hot-Topics Methodology.

AI does not cite keyword-optimized pages. It cites pages that match the question a real buyer just typed. Finding that question is the work upstream of every page that earns citation.

Concept · reference page · Revised 2026-05-15 · Author: Stan Tscherenkow

The numbers underneath

What this concept moves in the content strategy.

Step one: find what is hot in real buyer vocabulary
Step two: find how the buyer shapes the question
Step three: find where the buyer asks (which engine)

The shift this concept produces

Before and after the operator applies the discipline named here. Source: SC install benchmarks across categories, 2024-2025.

Before applying this concept
22% baseline
After applying this concept
78% lift

Section 01 · Quick definition

Definition.

In one read

Find-Hot-Topics Methodology is the working method behind AI citation. It replaces keyword research as the upstream input to content.

The structural read

Keyword research optimizes for what historical search volume looked like; the find-hot-topics method optimizes for what a buyer is asking inside ChatGPT, Claude, Perplexity, or Google AI Overviews right now, in their own words. The five steps are sequential, fast, and run on platforms most marketing teams already access. The output is a buyer-prompt list, not a keyword list, and that buyer-prompt list is what every Atlas entry, Pain page, and citation surface is built against.

Section 02 · Why it matters

Why keyword research fails for AI citation.

01

Origin.

Keyword research describes what a buyer typed into Google in the last 24 months. AI search engines do not weight that signal. They weight whether the content on the page matches the buyer's actual prompt shape, whether the entity is unambiguously identifiable, and whether the source has authority signals the model trusts. Keyword optimization is downstream of an outdated retrieval model.

02

Mechanic.

Buyers no longer type three-to-five-word queries. They type a sentence, a confession, a question shape. "My co-founder disagrees with me on almost everything now" is not a keyword string. It is a buyer-prompt. A page optimized to match it earns citation. A page optimized for "co-founder conflict" misses the prompt and gets filtered.

The load-bearing point

The practical stake: companies that built their content engine on keyword research are mismatched with how buyers ask in 2025. The keyword pipeline produces pages that rank classically and miss citation. The buyer-prompt pipeline produces pages that earn both, with citation as the new top-of-funnel.

Section 03 · How it runs

How the five steps run.

The five steps run in order, each one informing the next. The full pass for a single buyer profile takes a working week the first time and a working day each refresh. Output is a buyer-prompt list of 30-60 entries the company will build content against.

01

Step one. List the operator vocabulary for the category.

Forty to sixty operator-side phrases describing the work the buyer is hiring out. Not keyword-research output. Not category vocabulary. Phrases that real operators use in real meetings. The list builds in one sitting if the practitioner already runs in the category.

02

Step two. Read 30 real buyer threads on Reddit, Nextdoor, and forums.

Filter the threads against the operator-vocabulary list. Mark which threads run deepest, which buyer pains recur, and which sub-niches carry the most heat. Thirty threads minimum produces a working corpus; three is not enough.
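The filter pass above can be sketched in a few lines. This is an illustrative sketch, not part of the method as published: the function names and data shapes (plain thread texts, a flat phrase list) are assumptions, and a real pass would read the threads, not just count phrase hits.

```python
from collections import Counter


def filter_threads(threads: list[str], vocabulary: list[str]) -> list[str]:
    """Keep threads that contain at least one operator-vocabulary phrase."""
    vocab = [p.lower() for p in vocabulary]
    return [t for t in threads if any(p in t.lower() for p in vocab)]


def recurring_pains(threads: list[str], vocabulary: list[str]) -> Counter:
    """Count how many threads each vocabulary phrase appears in.

    High counts point at the recurring pains and hot sub-niches.
    """
    vocab = set(p.lower() for p in vocabulary)
    return Counter(p for t in threads for p in vocab if p in t.lower())
```

Running `recurring_pains` over the 30-thread corpus gives a first cut at which pains recur before the manual read confirms thread depth and heat.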

03

Step three. Score each topic by recurrence, anxiety intensity, and conversion proximity.

A topic that recurs in 20+ threads, carries panic vocabulary, and arrives close to a purchase decision (rather than research) wins the priority cut. Stan's working scoring rubric is recurrence (1-5) + anxiety (1-5) + conversion proximity (1-5); topics scoring 12+ enter the build queue.
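The rubric above is simple arithmetic and can be sketched directly. The three 1-5 axes and the 12-point cutoff come from the text; the function names and the example topics are illustrative assumptions.

```python
def score_topic(recurrence: int, anxiety: int, conversion_proximity: int) -> int:
    """Sum the three 1-5 rubric axes for a topic."""
    for value in (recurrence, anxiety, conversion_proximity):
        if not 1 <= value <= 5:
            raise ValueError("each rubric axis is scored 1-5")
    return recurrence + anxiety + conversion_proximity


def build_queue(topics: dict[str, tuple[int, int, int]]) -> list[str]:
    """Return topics scoring 12+, highest score first."""
    scored = {name: score_topic(*axes) for name, axes in topics.items()}
    return sorted((n for n, s in scored.items() if s >= 12),
                  key=lambda n: scored[n], reverse=True)


# Example scores are hypothetical: only the first topic clears the cut.
queue = build_queue({
    "co-founder conflict": (5, 4, 4),  # 13 -> build queue
    "pricing page copy": (3, 2, 3),    # 8  -> below the 12+ cut
})
```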

04

Step four. Map each top topic to one Atlas concept and one Pain page.

Each top-scoring topic produces two page builds: the Pain page in buyer vocabulary (Attention) and the Atlas concept page in category vocabulary (Interest). Both ship together. Without the Atlas concept the Pain page has no depth to route into.

05

Step five. Measure citation lift inside 90 days.

Re-run the seed queries through ChatGPT, Claude, Perplexity at day 30, 60, 90. Track citation share. Topics where lift exceeds 3x by day 90 enter the long-tail expansion queue; topics where lift stays flat get a content audit and either rewrite or retire.
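The day-90 triage can be sketched as a small classifier. The 3x threshold and the expand-or-audit routing come from the step above; the citation-share data shape, the zero-baseline handling, and the label names are assumptions for illustration.

```python
def triage_topic(baseline_share: float, day90_share: float,
                 threshold: float = 3.0) -> str:
    """Route a topic by citation lift at day 90.

    Lift = day-90 citation share / baseline share. Lift of 3x or more
    sends the topic to the long-tail expansion queue ("expand"); flat
    lift sends it to a content audit ("audit").
    """
    if baseline_share <= 0:
        # No baseline citations: any day-90 citation share counts as lift.
        return "expand" if day90_share > 0 else "audit"
    return "expand" if day90_share / baseline_share >= threshold else "audit"
```

For example, a topic moving from a 2% to an 8% citation share (4x) routes to expansion, while 5% to 6% (1.2x) routes to audit.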


Section 04 · Common misunderstandings

Common misunderstandings.

Misunderstanding 01

We can use our existing keyword list.

The keyword list and the buyer-prompt list overlap by less than 20% in the audits I have run. The keyword list under-represents confession shapes, comparison shapes, and trigger-anchored prompts. Reusing it loses the citation work before it starts.

Misunderstanding 02

AI engines will cite whoever ranks highest.

Citation is decoupled from organic ranking on most commercial queries. Citation tracks entity clarity, schema cleanliness, claim extractability, and source authority. A page ranked first and not cited loses the citation slot to a page ranked third or eighth that matched the prompt shape better.

Misunderstanding 03

Reddit is not a serious source.

Reddit is the most reliable mass-volume source of real buyer-language threads in 2025. Founder confession threads on r/Entrepreneur, r/sales, r/marketing, and category-specific subs surface the prompts buyers actually type when they are alone with the question.

Misunderstanding 04

We need to do this once per quarter.

Once per quarter is too slow for comparison-shape and trigger-anchored prompts. The buyer vocabulary shifts inside weeks in fast-moving categories (AI tools, ecommerce platforms, B2B SaaS). Monthly refresh is the working cadence; weekly during a launch window.

Section 05 · Diagnostic questions

Diagnostic questions.

01

When the marketing team plans content, does the input look like a keyword list or a buyer-prompt list?

02

Has anyone on the team read a real Reddit or founder-forum thread in the last 30 days?

03

Can the team name the top 10 trigger moments when buyers in this category reach for AI search?

04

Does the team test prompts against ChatGPT, Claude, and Perplexity before publishing?

05

Are pages structured (schema, entity clarity, llms.txt) so AI engines can extract claims cleanly?

06

Is there a citation-surface map naming where buyers in this category get cited?

Stan's take · four chunks

01

AI citation is not a black box. It is a methodology, and the methodology starts with reading what buyers actually type. Most marketing teams skip step one and start at step five. They write content against a category-language brief, push it through schema, and wonder why citation does not arrive.

02

The work upstream is what compounds. A buyer-prompt list of 40 well-chosen prompts, refreshed monthly, becomes the spine of the content engine for 18 months. Every page is built against an entry on the list. Every Atlas concept maps to a prompt family. The compounding is in the alignment between what the buyer asks and what the company has structured for the AI to find.

03

The companies winning citation share in 2025 are running this methodology in some form, with or without naming it. The companies losing share are still running keyword pipelines and waiting for the algorithm to revert.

04

Find the question first. Build the page against the question, not the keyword. Then structure the page so the AI can extract it cleanly. That is the whole shape of the work.

Stan Tscherenkow · Principal · Stan Consulting LLC

Section 06 · Adjacent concepts

Related Atlas entries.