Stan Consulting LLC · Marketing Atlas · Position · The Three-Layer Google Ads Diagnostic

The Three-Layer Google Ads Diagnostic.

The SC methodology for diagnosing a Google Ads account before any restructure. Layer 1: account integrity. Layer 2: campaign structure. Layer 3: bid-strategy alignment. In that order — not parallel.

Section 01 · The claim.

Most Google Ads audits inspect the wrong layer first. Account integrity defects hide upstream of campaign-structure defects, which hide upstream of bid-strategy defects. Diagnose top-down or compound the misread.

The claim has two parts. The first part is empirical: in the accounts the firm has read, the defect with the largest downstream effect is almost always at Layer 1, and almost never the layer the operator or the prior agency thinks they are looking at. The second part is structural: the metrics each layer produces are downstream of the layer above. Reading Layer 3 metrics before Layer 1 has been audited gives the auditor a number that is real but uninterpretable.

The diagnostic is not a checklist. It is an order. Account integrity first, campaign structure second, bid-strategy alignment third. Run them out of order and the verdict is wrong on arrival, not because the work was sloppy but because the work was given input that could not be trusted.

Section 02 · What most people believe.

The conventional Google Ads audit reads as a list of independent inspection points. CPC. CTR. ROAS. Quality Score. Search-terms report. Each metric gets a paragraph. The audit closes with a list of recommendations. The recommendations span all three layers without saying so. They look reasonable because each one is reasonable in isolation.

Inspection 01

CPC trend. Cost-per-click is read as a market signal. Rising CPCs are blamed on competitive pressure, then on platform pricing, then on bid strategy. The audit recommends tightening targets or shifting match types. This sounds like a Layer 3 fix.

Inspection 02

CTR by ad group. Click-through rate is read as a creative signal. Low CTR is blamed on copy or asset quality. The audit recommends new ad variants and DSA expansion. This sounds like a Layer 2 fix.

Inspection 03

ROAS by campaign. Return-on-ad-spend is read as a performance signal. Underperforming campaigns get paused or restructured. The audit recommends a structural reshuffle, sometimes a Performance Max consolidation. This sounds like a Layer 2 fix.

Inspection 04

Quality Score. Quality Score is read as a relevance signal. The audit recommends keyword pruning, landing-page work, ad-copy alignment. This sounds like a Layer 2 fix that overlaps with a Layer 1 issue, but the Layer 1 issue is rarely named.

Inspection 05

Search-terms report. The search-terms report is read as a hygiene signal. The audit recommends new negative keywords. The negatives go onto the campaign-level lists, not the account-level lists. This is the Layer 1 fix that gets done at Layer 2.

The conventional audit produces a list of true things that do not stack. Each recommendation is locally correct. None of them are read against the layer above.

Section 03 · Why that belief fails.

The structural argument is simple. Google Ads is a layered system. Each layer optimizes against the signal handed to it by the layer above. Smart Bidding optimizes against the conversion goal it was given. The campaign optimizes against the negatives, audiences, and goals defined at the account level. The reporting layer reflects whatever the bidding produced.

If the conversion goal is corrupted — double-counting events, missing primary conversions, attribution model misaligned — everything downstream is optimizing against a wrong target. The campaign that "underperforms" might be the cleanest one. The bid strategy that "got more aggressive" might be doing exactly what it was told.

The conventional audit moves laterally across the layers, treating each metric as independent. The metrics are not independent. The same defect at Layer 1 will produce different symptoms at Layer 2 and Layer 3 in the same account. Reading the symptoms instead of the cause gets you three recommendations that all chase the same source.

Three failure modes of the lateral audit are worth naming directly.

Failure mode one. Bid strategy optimizes against signal that was already corrupted at Layer 1. Smart Bidding does not know the conversion goal is double-counting. It learns. Its learning is the failure mode — faster convergence on the wrong target.

Failure mode two. Restructuring a campaign without fixing account-level negatives just relocates the waste. Pause one campaign, the impressions move to another. The waste is not in the campaign. The waste is in the negative-keyword list, and moving it across campaigns is theater.

Failure mode three. Attribution-model changes at Layer 1 invalidate every prior bid-strategy comparison. An account that switches from last-click to data-driven has a new conversion signal. Comparing pre-switch ROAS to post-switch ROAS is comparing different units, and it is the most common reason the audit recommendation reads "ROAS got worse" when nothing in the campaign actually changed.

The lateral audit is not wrong because it inspects the wrong things. It is wrong because it inspects the right things in the wrong order.

Section 04 · The SC position.

Read the account top-down, in three layers, in the order the layers depend on each other. Account integrity is read first because it produces the signal everything else optimizes against. Campaign structure is read second because it is the surface on which signal becomes spend. Bid-strategy alignment is read third because it is the layer with the least independent agency.

Each layer is named below with its scope, its inspection set, and the test that says it has been read.

L1

Account integrity

The set of account-level conditions that produce the signal every campaign and every bid optimizes against. Conversion goals, account-level negative-keyword lists, branded-search carve-out, attribution-model integrity, audience-signal hygiene. These are not campaign-level objects. They are upstream of every campaign.

  • Conversion goals · primary event, double-counting check, model alignment
  • Account-level negative lists · canonical source, no contradictions downstream
  • Branded-search carve-out · one campaign owns the brand query set
  • Attribution model · matches operator's revenue-recognition logic
  • Audience signals · one owner per Customer Match list, one per Similar Audience

Test it has been read: the auditor can name, in writing, every account-level rule that touches every campaign.

L2

Campaign structure

The set of campaign-level decisions that translate clean account-level signal into ad delivery. Naming convention, ad-group thematic coherence, geo / device / time targeting, match-type discipline. Structure here is the working surface. It does not produce signal. It distributes it.

  • Naming convention · one schema across the account, no operator drift
  • Ad-group thematic coherence · one intent per group, negatives baseline applied
  • Geo / device / time controls · at the campaign layer, portfolio overrides as exceptions
  • Match-type discipline · broad, phrase, exact used as the intent calls for, with negatives as the boundary

Test it has been read: a third-party operator can read the campaign list and predict, by name, what each campaign is supposed to do.

L3

Bid-strategy alignment

The set of bidding decisions that the platform makes against the cleaned signal. Smart Bidding strategy choice, target settings, learning-period management, portfolio-bidding decisions. Layer 3 has the least agency because it is constrained by what Layers 1 and 2 hand it.

  • Strategy choice · Target ROAS, Maximize Conversions, Manual CPC, fitted to the campaign's data volume
  • Target settings · tight enough to be meaningful, wide enough not to starve learning
  • Learning-period management · relearning windows after Layer 1 or 2 changes, daily monitoring during
  • Portfolio bidding · campaigns grouped only when they share signal and intent

Test it has been read: the auditor can predict, before any change is made, how Smart Bidding will respond to a Layer 1 or Layer 2 fix.

Section 05 · The mechanism.

Below is the working spec. Each layer has four numbered diagnostic moves. The moves are read in order, completed in writing, and signed before the next layer is read. The whole diagnostic completes in under twenty hours of read time on a typical Series-A account, regardless of campaign count.

L1 Account integrity Read first · foundational

Read conversion-goal definitions before any metric

Confirm the primary conversion event is purchase, lead, or whatever the operator's revenue-recognition logic actually rewards — not add-to-cart and not a soft micro-conversion. Verify no double-counting across the conversion-goal grouping. Inspect the attribution model and confirm it matches the operator's revenue-recognition logic, not a default the platform set.
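The double-counting half of this move can be sketched in a few lines. Everything below is hypothetical, the data shape, field names, and action names are invented for illustration; this is the logic of the read, not the Google Ads API:

```python
# Hypothetical sketch of the double-counting check. The data shape is
# illustrative; it is not the Google Ads API, only the logic of the read.
def find_double_counting(conversion_actions):
    """Return revenue events that more than one primary action reports."""
    owners = {}
    for action in conversion_actions:
        if action["primary"]:
            owners.setdefault(action["event"], []).append(action["name"])
    return {event: names for event, names in owners.items() if len(names) > 1}

actions = [
    {"name": "GA4 purchase import", "event": "purchase", "primary": True},
    {"name": "Gtag purchase",       "event": "purchase", "primary": True},
    {"name": "Add to cart",         "event": "add_to_cart", "primary": False},
]
# Two primary actions report the same purchase event: Smart Bidding
# sees every sale twice.
print(find_double_counting(actions))
```

Anything this returns is a Layer 1 defect by definition: the goal grouping is handing the bidder a doubled signal.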

Audit the account-level negative-keyword list

Confirm the account-level negative list is the canonical source. Identify campaign-level and ad-group-level negatives that contradict it. Resolve all contradictions to the account level before reading any campaign-level metric. The contradiction itself is the defect.
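The contradiction check can be sketched the same way. The matching below is naive substring containment against phrase-style negatives, a simplification of how negative matching really works, used only to show the shape of the contradiction:

```python
# Hypothetical sketch: targeted keywords that an account-level negative
# would block. Substring matching is a deliberate simplification of real
# negative-keyword matching.
def negative_contradictions(account_negatives, campaign_keywords):
    blocked = [neg.lower() for neg in account_negatives]
    out = {}
    for campaign, keywords in campaign_keywords.items():
        hits = [kw for kw in keywords
                if any(neg in kw.lower() for neg in blocked)]
        if hits:
            out[campaign] = hits
    return out

account_negatives = ["free", "jobs"]
campaign_keywords = {
    "Search_Shoes": ["running shoes", "free running shoes"],
    "Search_Boots": ["hiking boots"],
}
# Search_Shoes targets a keyword its own account-level list negates.
print(negative_contradictions(account_negatives, campaign_keywords))
```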

Confirm the branded-search carve-out

Locate every campaign that touches branded queries. Confirm a single owner campaign exists. Eliminate any internal branded-search competition between campaigns. Internal CPC inflation is almost always a missing carve-out.
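The carve-out test reduces to one question: how many campaigns come back when you filter served queries by the brand term set? A minimal sketch, assuming a hypothetical export of queries per campaign:

```python
# Hypothetical sketch: every campaign whose served queries touch a brand
# term. More than one name back means the carve-out is missing.
def brand_query_owners(campaign_queries, brand_terms):
    brand = [t.lower() for t in brand_terms]
    return [name for name, queries in campaign_queries.items()
            if any(t in q.lower() for q in queries for t in brand)]

campaign_queries = {
    "Brand_Search":   ["acme shoes", "acme store"],
    "Generic_Search": ["running shoes", "acme running shoes"],
}
# Two campaigns serve against "acme": they are bidding against each other.
print(brand_query_owners(campaign_queries, ["acme"]))
```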

Inspect audience-signal hygiene

Map every Customer Match list, Similar Audience, and signal asset to its owning campaign. Identify duplicate assignments. Resolve to one owning campaign per signal. Duplicates corrupt both the bidding signal and the reporting attribution.
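The mapping step is mechanical enough to sketch. Signal and campaign names below are invented; the point is the inversion from campaign-to-signals into signal-to-owners:

```python
# Hypothetical sketch: audience signals assigned to more than one campaign.
def duplicate_signal_owners(assignments):
    owners = {}
    for campaign, signals in assignments.items():
        for signal in signals:
            owners.setdefault(signal, []).append(campaign)
    return {s: c for s, c in owners.items() if len(c) > 1}

assignments = {
    "Prospecting": ["CM_past_buyers", "SA_lookalike"],
    "Retargeting": ["CM_past_buyers"],
}
# CM_past_buyers has two owners: its signal and its attribution are split.
print(duplicate_signal_owners(assignments))
```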

L2 Campaign structure · Read second · structural

Read the campaign-name convention

Confirm a single canonical naming schema across all campaigns. If three operators left three schemas, the convention is broken even if every individual name is technically valid. Naming is the legibility layer; without it, every other Layer 2 read is slower and more wrong.
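Once the canonical schema is written down, the drift check is a one-line filter. The schema regex below is an invented example (org code, campaign type, theme, geo); the real schema is whatever the account's one convention says it is:

```python
import re

# Hypothetical schema: <org code>_<campaign type>_<theme>_<geo>.
SCHEMA = re.compile(r"^[A-Z]{2,4}_(Search|PMax|Shopping)_[A-Za-z0-9]+_(US|EU|UK)$")

def naming_drift(campaign_names, schema=SCHEMA):
    """Campaign names that do not match the canonical schema."""
    return [name for name in campaign_names if not schema.match(name)]

names = ["SC_Search_Running_US", "SC_PMax_Core_EU", "search-running-eu-old"]
# The third name is a prior operator's schema: drift, even though it is
# perfectly readable on its own.
print(naming_drift(names))
```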

Audit ad-group thematic coherence

Confirm every ad group contains queries that share the same intent and the same negatives baseline. Mixed-intent ad groups are a Layer 2 defect that surfaces as low Quality Score and uneven CTR. Splitting them is structural work, not a creative refresh.

Inspect geo, device, and time controls

Confirm geo, device, and time-of-day controls are at the campaign layer. Portfolio-level overrides should be exceptions, not the default. Controls scattered across portfolio strategies hide where each adjustment actually applies.

Audit match-type discipline

Confirm match-type usage matches the intent the campaign was scoped against. Drift toward broad match without the corresponding negatives baseline is a Layer 2 defect that compounds at Layer 3 because Smart Bidding sees broad-match data as more learning surface than it should.

L3 Bid-strategy alignment · Read third · downstream

Confirm bid strategy matches account state

After Layers 1 and 2 are clean, confirm Smart Bidding strategy choice fits the campaign's data volume and goal. Target ROAS, Maximize Conversions, and Manual CPC are not interchangeable. A Target ROAS strategy on a low-volume campaign with corrupted Layer 1 signal is a near-guaranteed underperformer.

Read learning-period status

Identify any campaign in or near a learning period. Confirm targets are wide enough to allow learning and that frequent target changes are not preventing convergence. The most common Layer 3 defect is a campaign stuck in learning because someone has been adjusting the target weekly.
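The target-churn read can be sketched from a change log. The threshold of two changes per thirty days is an illustrative assumption, not a platform constant:

```python
# Hypothetical sketch: campaigns whose bidding target was changed more often
# than a learning period can absorb. Threshold values are assumptions.
def churned_targets(change_log, window_days=30, max_changes=2):
    """change_log maps campaign -> days-ago of each target change."""
    return [campaign for campaign, days in change_log.items()
            if sum(1 for d in days if d <= window_days) > max_changes]

change_log = {
    "PMax_Core":    [2, 9, 16, 40],   # three changes inside 30 days
    "Search_Brand": [25],
}
# PMax_Core is the campaign stuck in learning by weekly adjustment.
print(churned_targets(change_log))
```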

Audit portfolio-bidding decisions

Confirm portfolio strategies group campaigns that share signal and intent. Mismatched portfolios are a Layer 3 defect that hides as a Layer 2 problem — the campaigns look unrelated, but the portfolio is averaging across them as if they were one.

Set the relearning window

If Layers 1 and 2 have been changed, widen ROAS or CPA targets for the first fourteen days, then re-tighten as the new signal stabilizes. Skipping this step starves the system. The audit's final written instruction is almost always the relearning window because it is the most commonly skipped step in implementation.
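The relearning window is simple enough to state as arithmetic. For a ROAS target, widening means lowering the target (a CPA target would be raised instead). The fourteen-day window follows the spec above; the 20% widen is an illustrative assumption:

```python
# Hypothetical sketch of the relearning window for a ROAS target.
# Widening a ROAS target means lowering it; a CPA target is the inverse.
# The 20% widen is an assumption; the 14-day window follows the spec.
def relearning_target(stable_target, days_since_change,
                      widen_pct=0.20, window_days=14):
    if days_since_change <= window_days:
        return round(stable_target * (1 - widen_pct), 4)
    return stable_target

# Day 7 after a Layer 1 fix: a 4.0x ROAS target runs widened at 3.2x.
print(relearning_target(4.0, 7))
# Day 15: the window has closed, the target re-tightens to 4.0x.
print(relearning_target(4.0, 15))
```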

Section 06 · Evidence and case links.

The Position page is the doctrine. The links below are where the doctrine has been applied, taught, or summarized for a different audience. Each link is a test the doctrine has had to pass.

Primary case

The Account With 47 Campaigns and No Decision Logic

The composite case file the diagnostic was written against. Series-A DTC, $4.2M annualized, $220K monthly Google Ads spend, three agency handoffs over thirty months. The structural cause was Layer 1, not Layer 3.

Read the case file →

Operator-facing read

Google Ads Audit Checklist

The blog version. The same diagnostic, written for a marketing director who needs the read order in a usable form. Less doctrine, more checklist.

Read the checklist →

Framework page

The Three-Layer Google Ads Architecture

The public framework summary. The architecture rendered as a commercial page on the SC site, with the engagement bridge attached. The framework is the public face; this Position page is the defended doctrine.

Read the framework →

Engagement format

Conversion Second Opinion

The engagement format that runs this diagnostic at $999. Seventy-two-hour written verdict against the three-layer methodology. The commercial surface for the read.

Read the engagement →
Section 07 · Where it breaks.

Every methodology has assumptions. Naming the assumptions is part of defending the position. The diagnostic assumes baseline operator competence and trustworthy conversion-tracking data. When either assumption is false, the read needs a different first move.

01

Pre-conversion-tracking accounts

If the account has no working conversion tracking, Layer 1 has no signal to audit. The diagnostic does not start. The first move is conversion-tracking installation as a separate scope, then the diagnostic runs against the new signal once it has accumulated. Running the three-layer read on a tracking-blind account produces noise.

02

Performance Max-only accounts

Performance Max collapses Layers 2 and 3 into the platform's automated layer. The campaign structure inside a Performance Max campaign is mostly invisible to the operator, and the bid strategy is bundled with the campaign type. The three-layer read still works on Layer 1, but Layers 2 and 3 fold into a different audit shape that is documented separately.

03

Accounts with active fraud-traffic problems

If the account is being hit by click fraud or junk traffic at scale, Layer 1 signal cannot be trusted because the conversion data itself is contaminated. The first move is a traffic-quality audit using IP-level and engagement-pattern data, then the three-layer diagnostic runs once the contamination is bounded. Reading the three layers on a fraud-poisoned account misattributes everything.

04

Accounts under thirty days old

The diagnostic depends on a meaningful learning history. An account under thirty days has not yet produced enough Layer 3 learning data for a Smart Bidding read. The Layer 1 and Layer 2 read can run; the Layer 3 read returns a "not yet" verdict by design.

Section 08 · What it costs to apply.

The diagnostic runs as the Conversion Second Opinion for operators who want the read on its own. It runs as the entry to a Sprint or Consulting tier for operators ready to install. The methodology is the same in either format. The deliverable shape and the engagement length are different.

Diagnostic only

Conversion Second Opinion

$999 · 72-hour verdict

A written diagnostic verdict against the three-layer methodology. Read across all three layers. Named structural cause. Recommended install order. No restructure, no implementation. The read.

See the engagement →

Diagnostic plus install

Revenue Sprint or Consulting tier

Engagement-scoped · read first, scope second

The diagnostic runs first as the scoping artifact. The Sprint or Consulting engagement runs the install across the layers in order. Pricing is set against the install scope after the read is complete; the read is the input that makes the price honest.

See the engagement formats →

Five Cents · Stan's note

Why I built this in three layers and not in five or in one. The honest answer is that three is the smallest number that holds the actual structure of the platform without forcing a fake stack. There is no useful Layer 0. There is no useful Layer 4. The platform really does have account-level conditions, campaign-level decisions, and an automated bidding layer, and they really do depend on each other in that order.

The reason most audits invert the order is not technical. It is contractual. Layer 3 is the most visible to the operator and the most defensible to the CFO, because Smart Bidding has a name and a number attached to it. Layer 1 is invisible until somebody opens the conversion-goal grouping and reads what is inside. Auditors who depend on the visible layer for their fee structure are not going to start at the invisible layer. The diagnostic exists to invert that incentive.

What I keep seeing: the operator who reads this Position page recognizes their own account in the second paragraph. The recognition is the deliverable. The seventy-two-hour verdict is just the document that proves the recognition was right.

Stan Tscherenkow · Marketing Atlas · 2026-05-07
Section 10 · Related Atlas entries.

The five Reference pages in the Google Ads Waste cluster, the case file the diagnostic was written against, and the hub. The graph below is the cluster map.

If you read this and recognized your account

Run the diagnostic. Then decide.

The Conversion Second Opinion runs this diagnostic against your account in seventy-two hours. A written verdict, the structural cause named, the install order set in the order Layers 1, 2, and 3 require. If the verdict says install, the engagement formats are scoped against the read. If the verdict says hold, you keep the read and act on it yourself.