
Stan Consulting · Marketing Atlas · Case File · Google Ads Waste

The Account With 47 Campaigns and No Decision Logic.

case_type: composite
proof_level: composite_pattern
cluster: google-ads-waste
published: 2026-05-07
Section 01 · The setup.

A Series-A direct-to-consumer brand on Shopify Plus. Around four million in annualized revenue. Two-hundred-twenty-thousand a month in Google Ads spend. Forty-seven active campaigns inherited from three different agency relationships across thirty months. The marketing director ninety days into the role. The CFO asking for a thirty-day decision.

That is the composite. The names change. The setup does not.

The brand had been live for four years. The first agency had run a twelve-campaign structure that mostly worked. The second agency had been hired to "scale" and added Performance Max plus a duplicate set of search campaigns segmented by audience signal. The third agency — the current one — had inherited that pile, layered another round of audience splits and remarketing variants on top, and rebranded the structure as a "full-funnel architecture." Twelve became twenty-eight became forty-seven. Nobody had ever shut anything off.

The marketing director was new. The CFO had finally noticed that quarterly ad spend had crept up nineteen percent year over year while revenue had grown eight. The board called it a leakage problem. Nobody on the operating side could name where the leak was coming from, because the structure had not been read end-to-end in three rounds of agency handoff.

This is the case file. The audit was scoped for forty-eight hours. The diagnostic verdict went back in writing in seventy-two.

Stage: Series A · DTC consumer brand
Annualized revenue: $4.2M
Google Ads spend: $220K monthly
Active campaigns: 47
Account age: 30 months under three agency operators
Reporting cadence: Weekly with monthly retrospective
Marketing director tenure: 90 days
Section 02 · The visible problem.

Three numbers were on the table when the audit started.

ROAS down eighteen percent month over month. CPCs on non-branded search up about thirty percent over the prior quarter. Branded-search impression share holding flat at the top, but with rising CPCs that no new competitor in the auction insights report could explain.

The CFO had a thirty-day decision in front of him. Two options had already been drafted by the agency. Cut Google Ads spend by forty percent and reallocate to Meta. Or pause the lowest-ROAS twelve campaigns and run a full restructure inside the existing agency relationship, billed as a project on top of the retainer.

Both options shared the same defect. They were responses to the surface metric. The eighteen-percent ROAS slide was real. It was also a symptom three layers deep, and neither option addressed any of the layers above it.

The audit was commissioned because the marketing director recognized that the agency's two options were both spending decisions and both were being made without a structural read of the account. She wanted a second opinion before the CFO's deadline.

Section 03 · The wrong explanation.

The team had been told three things. Each one was almost-right. Each one was wrong because it skipped the layer that actually mattered.

Wrong reason 01

"Smart Bidding has gotten more aggressive." The agency had pointed at Google's bid strategies as the cause. The argument was that the bidding algorithms had shifted target behavior in the last quarter and that ROAS targets needed to be raised. This is the kind of explanation that sounds credible because it is partly true and entirely off-topic. Smart Bidding was optimizing against the conversion signal it had been given. The signal itself was not clean. Tightening the target on a corrupted signal does not produce a better outcome — it produces a smaller, faster wrong answer.

Wrong reason 02

"iOS 14 broke our attribution." This had become the universal alibi by the second year of the account. It did break some things. It did not break account-level negative-keyword logic, branded-search overlap, conversion-goal definition, or the search-terms report. The iOS 14 narrative was pointed at every metric that wobbled, and it carried just enough technical weight that nobody pushed past it. Two of the three structural defects in this account had nothing to do with mobile attribution.

Wrong reason 03

"Google has gotten worse." A platform-blaming reflex that resurfaces every eighteen months in every paid-media practice. There is sometimes a real platform shift behind it. Most of the time it is a rationalization for not reading the account. The account in front of us had not been read end-to-end since the second agency handoff. Eleven weeks of the search-terms report had never been opened. The story that "Google has gotten worse" was costing about thirty thousand a month in spend that was never going to recover, and it was easier to repeat than to take apart.

All three explanations had the same shape. They pointed outward. The structural cause was inward, and it had been there long enough to compound.

Section 04 · The structural cause.

The account had forty-seven campaigns and no decision logic. That sentence is the whole verdict. Everything else in this section is just naming the parts.

"Decision logic" is the rule that says when this signal happens, this part of the account moves; when that signal happens, this other part shuts off. A clean account has decision logic at every layer. A messy one has decision logic at one layer and improvisation at the others. This account had improvisation at all three.

Six things were true at the same time when the audit began. None of them were independent. All of them compounded.

One. No naming convention. Campaign names had been written by three different operators in three different schemas. The first agency used market plus product type. The second used funnel stage plus audience. The third used campaign type plus a launch date. Reading the campaign list was reading three handwritings on the same letter.

Two. Negative-keyword lists fragmented across fourteen ad groups. There were three account-level lists, but the rules in them contradicted lists pinned at the campaign level for six campaigns and the ad-group level for fourteen more. Search queries that had been negated at one layer were still triggering ads at another. The total wasted impression count from this single defect was running at about nine percent of monthly spend.
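The fragmentation defect is mechanically checkable from an export. The sketch below uses invented query and keyword data to show the shape of the check: queries negated at the account level that still triggered ads because only a lower layer's list was actually applied.

```python
# Illustrative fragmentation check. Keywords and queries are invented;
# a real check would read exported negative lists and the search-terms report.
account_negatives = {"free", "jobs", "diy"}   # the intended canonical list
adgroup_negatives = {"cheap", "free"}         # the list actually applied locally
triggered_queries = {"free shipping promo", "diy repair kit", "buy widget"}

def leaks(queries, account_neg, local_neg):
    """Queries blocked on paper at the account level that still ran,
    because only the local list was enforced at serving time."""
    return {q for q in queries
            if any(n in q.split() for n in account_neg)
            and not any(n in q.split() for n in local_neg)}

print(sorted(leaks(triggered_queries, account_negatives, adgroup_negatives)))
```

Run against the real account, the equivalent of this check is what surfaced the roughly nine percent of monthly spend going to queries someone had already tried to negate.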

Three. Smart Bidding running on conversion goals that double-counted add-to-cart and purchase. Both events were inside the conversion-goal grouping. Both were being counted toward the ROAS target. The bidding algorithm was paying more for clicks because it thought it was getting two conversions instead of one. The corrupted signal was upstream of every single bid the algorithm had made for nine months.
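The arithmetic of the double-count is worth making explicit. The numbers below are invented for illustration, but the mechanism is the one described above: two events inside the primary conversion goal means the bidder sees inflated conversion value relative to real purchases.

```python
# Illustrative double-counting arithmetic. All figures are hypothetical.
purchases = 100
purchase_value = 80.0     # average order value
add_to_carts = 250        # wrongly included in the primary conversion goal
atc_assigned_value = 80.0 # valued like a purchase by the goal definition
spend = 10_000.0

true_roas = purchases * purchase_value / spend
corrupted_roas = (purchases * purchase_value
                  + add_to_carts * atc_assigned_value) / spend
print(f"true {true_roas:.2f}x vs reported {corrupted_roas:.2f}x")
```

A bidder told it is earning the corrupted number will keep paying for clicks the true number cannot justify. That gap, not algorithm aggression, is what the account had been funding for nine months.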

Four. Three campaigns were competing for branded search. Two were exact-match brand campaigns at different bid levels. One was a "brand-protection" campaign added by the third agency that overlapped with both. The branded-search auction had three internal bidders for the same query. The CPC inflation that the team had been blaming on a phantom competitor was self-inflicted.
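Internal auction overlap is also checkable. The sketch below uses an invented brand and invented keyword sets, but the test is the real one: count how many of your own campaigns are eligible for the same branded query.

```python
# Illustrative internal-overlap check. Campaign names, brand, and
# keyword sets are invented for the sketch.
eligibility = {
    "brand-exact-a": {"acme widgets"},
    "brand-exact-b": {"acme widgets"},
    "brand-protection": {"acme widgets", "acme"},
}

query = "acme widgets"
bidders = [name for name, kws in eligibility.items() if query in kws]
print(len(bidders), "internal bidders for", repr(query))
```

Any count above one means the CPC pressure on that query is at least partly self-inflicted, which is exactly what the phantom-competitor story was hiding.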

Five. Six campaigns had overlapping audience signals. Customer Match lists were attached to four prospecting campaigns and two remarketing campaigns. The platform was deciding which campaign got the impression. Reporting was crediting each campaign as if it had won the audience cleanly. The signal was muddied for the operator and incoherent for the bidding algorithm.

Six. The search-terms report had not been opened in eleven weeks. The agency report did not include it. The internal team did not look at it. Eleven weeks of "what queries actually triggered our ads" had accumulated, and the answer had material consequences. Roughly fourteen percent of non-branded spend over that window had gone to queries that were either off-product or already negated at a different layer.
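The fourteen-percent figure is the output of a simple read over the search-terms export. The rows and column names below are assumptions for the sketch, not the actual report schema, but the computation is the one the report supports:

```python
# Illustrative waste-share read over a search-terms export.
# Rows, spends, and the off_product flag are invented for the example.
rows = [
    {"query": "buy widget pro",        "spend": 1200.0, "off_product": False},
    {"query": "widget repair manual",  "spend": 340.0,  "off_product": True},
    {"query": "free widget template",  "spend": 260.0,  "off_product": True},
]

total = sum(r["spend"] for r in rows)
wasted = sum(r["spend"] for r in rows if r["off_product"])
print(f"wasted share: {wasted / total:.1%}")
```

Eleven weeks of not running this five-line read is how a double-digit waste share accumulates without anyone being able to name it.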

Six things, one shape. The account had been running for thirty months without anyone reading the layer above the one they were operating on.

Section 05 · The decomposition.

This section is not the fix. The fix comes next. This section is the structural read — the pattern that explains why the account behaved the way it did.

The Three-Layer Google Ads Diagnostic is the SC method for taking a Google Ads account apart in the order it has to be read. The Position page documents the doctrine. The job here is to show the doctrine running against this specific account.

The diagnostic reads top-down. Account integrity first. Campaign structure second. Bid strategy third. Each layer's signal is downstream of the layer above it. Skip a layer or invert the order and the verdict is wrong on arrival. This account had defects at all three layers, and the defects compounded in the order they had been ignored.
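The order dependency can be sketched as control flow. The check functions below are placeholders, not the firm's actual checks; what the sketch shows is that a lower layer is never read until the layer above it passes, because its signal is downstream.

```python
# A sketch of the top-down read order. Each layer's check runs only
# after the layer above it passes. Check contents are placeholders.
def read_account(checks):
    for layer, check in checks:
        problems = check()
        if problems:
            return layer, problems  # stop: lower layers read dirty signal
    return None, []

checks = [
    ("L1 account integrity", lambda: ["conversion goal double-counts add-to-cart"]),
    ("L2 campaign structure", lambda: []),
    ("L3 bid strategy",       lambda: []),
]

layer, problems = read_account(checks)
print(layer, problems)
```

Starting at Layer 3, as most agencies do, is the equivalent of reversing this loop: you diagnose the bidder while its input is still corrupted.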

L1 · Account integrity · Foundation defect

This is the layer the audit found first because it has to be read first. Conversion goals were double-counting. Account-level negatives were contradicting campaign-level negatives. The branded-search carve-out did not exist as a structural rule; it had been improvised by adding more campaigns. Audience signals were duplicated across the prospecting and remarketing structures.

Every metric below this layer was running on corrupted input. Reading any campaign-level number before fixing the conversion-goal definition would have given the team a false read.

  • Conversion goals fixed first · remove add-to-cart from the primary ROAS target
  • Account-level negative list rewritten as the canonical source of truth
  • Branded-search carve-out moved into a single brand campaign with one bid strategy
  • Customer Match lists deduplicated and assigned to one campaign each
L2 · Campaign structure · Compounding defect

With Layer 1 named, the campaign-level read was straightforward. Forty-seven campaigns collapsed into a target structure of fourteen. The naming convention was rebuilt from market plus product line plus match type. Ad groups were re-themed against the cleaned account-level negative list. Geo and device targeting were brought back inside the campaign instead of being scattered across portfolio bid strategies.

The reduction from forty-seven to fourteen was not a cost-cutting move. It was a signal-quality move. Smart Bidding at Layer 3 was about to learn against fourteen clean conversion paths instead of forty-seven contradictory ones.

  • Naming convention frozen as market · product line · match type
  • Forty-seven campaigns reduced to fourteen on the principle of one campaign per coherent intent set
  • Ad groups made thematically coherent against the new negative-keyword baseline
  • Geo and device controls brought into campaign settings, off the portfolio strategy
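A frozen naming convention only stays frozen if it is enforceable. A minimal sketch, assuming a hypothetical token format (the case file names the fields — market, product line, match type — but not the exact separators or codes):

```python
import re

# Illustrative validator for the frozen convention
# market _ product-line _ match-type. Token format is an assumption.
PATTERN = re.compile(r"^[a-z]{2}_[a-z0-9-]+_(exact|phrase|broad)$")

def valid(name: str) -> bool:
    """True if a campaign name follows the frozen schema."""
    return bool(PATTERN.fullmatch(name))

print(valid("us_running-shoes_exact"))  # True
print(valid("FULL-FUNNEL_Q3_PMAX"))     # False: a prior agency's schema
```

A check like this, run on every new campaign, is what prevents the fourth operator from starting a fourth handwriting.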
L3 · Bid-strategy alignment · Downstream defect

Layer 3 is where most agencies start an audit. It is the wrong place to start, and it was the wrong place to start here. The bid strategies were not the cause of the eighteen-percent ROAS slide. They were the layer that had been instructed to optimize against the corrupted signal coming out of Layers 1 and 2.

Once Layer 1 and Layer 2 had been re-set, the bid strategies had to relearn against the cleaned signal. That is not optional. A Smart Bidding strategy holds learned-state from prior conversion data. The audit recommended a fourteen-day relearning window with widened targets, then a re-tighten as the new signal stabilized.

  • Bid strategies re-pointed at the cleaned conversion goal
  • Target ROAS widened by twenty percent for the first fourteen days of relearning
  • Portfolio bid strategy retired in favor of campaign-level bidding for the new fourteen-campaign shape
  • Learning-period drift monitored daily for the first two weeks, weekly thereafter
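"Widened by twenty percent" means a looser constraint on the bidder, which for a target-ROAS strategy is a lower target during the relearning window. The arithmetic, with an invented target (the case file does not state the account's actual tROAS):

```python
# Illustrative relearning-window math. The 3.0x target is hypothetical;
# only the twenty-percent widening comes from the audit.
original_target = 3.0
widened = original_target / 1.20  # lower target = looser constraint

print(f"days 1-14 target: {widened:.2f}x, then re-tighten to {original_target:.2f}x")
```

Re-tightening before the cleaned signal stabilizes starves the strategy of the data it needs to relearn, which is why the window is fourteen days and not two.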
Section 06 · The fix, in install order.

The audit's written verdict named the install order. Order matters. Restructuring campaigns before fixing the account-level signal would have produced a cleaner-looking account that was still optimizing against the same corrupted input. Re-pointing the bid strategy first would have stripped two weeks of learning data from a system that was about to be re-fed cleaner data anyway.

The audit was delivered through the Conversion Second Opinion engagement format — a written diagnostic verdict, scoped at $999, delivered in seventy-two hours — and led from there into a thirty-day Revenue Sprint to install the fix in the order below. The Sprint engagement is the implementation surface; the Diagnostic is the read.

  1. Layer 1 · Account integrity, week one

    Conversion goals rebuilt with purchase as the primary, add-to-cart removed from the ROAS calculation. Account-level negative list rewritten as the canonical source. Branded-search carve-out collapsed into one campaign with one bid strategy. Customer Match lists deduplicated. The point of this week is to fix the signal that every bid in the account is going to optimize against.

  2. Layer 2 · Campaign structure, weeks two and three

    Forty-seven campaigns collapsed into fourteen. Naming convention frozen as market, product line, match type. Ad groups re-themed. Geo and device controls brought back inside the campaigns. The point of these two weeks is to make every campaign in the account interpretable to a human reader and unambiguous to the bidding algorithm.

  3. Layer 3 · Bid strategy, week four

    Bid strategies re-pointed at the cleaned conversion goal. ROAS targets widened by twenty percent for the relearning window. Portfolio bidding retired in favor of campaign-level bidding for the new structure. The point of this week is to let Smart Bidding relearn against the cleaned signal without prematurely tightening targets and starving the system.

  4. Reporting · week four onward

    Weekly read of the search-terms report installed as a non-negotiable. Negative-keyword updates pushed back to the account-level list, never to ad groups. Monthly retrospective reframed around the three layers rather than around the channel mix. The point of this stage is to install the decision logic the account had never had.

  5. The CFO decision

    The original two options — cut spend forty percent or pause twelve campaigns — were both retired. The recommendation back to the CFO was to hold spend flat for thirty days, run the install, and review the layer-by-layer signal at day thirty before making any reallocation decision. Cutting spend on a corrupted signal would have moved the leakage. Cutting spend after the install would have been a real choice.

Section 07 · The lesson.

The compounding mechanism in this kind of account has a name. Each agency handoff inherits the prior structure as a working surface, not as a question. The first agency builds a clean twelve-campaign account. The second agency adds without subtracting because subtracting requires reading the prior agency's logic, and reading the prior agency's logic is unbilled work. The third agency does the same. By the third handoff the account has accumulated three sets of decision logic that were never reconciled, and the reconciliation cost has grown larger than any single agency was scoped to absorb.

The eighteen-percent ROAS slide was not a sudden event. It was the moment the cumulative signal corruption crossed the threshold of what Smart Bidding could compensate for. The threshold gets crossed quietly. The metric that crosses it first is usually ROAS, because ROAS is the metric most exposed to upstream signal quality. The account had been compounding toward this point for at least nine months — arguably longer, but nine months was the window in which the corrupted conversion goal had been live.

The lesson is that account hygiene is not a maintenance task. It is the precondition for any other claim about the account being true.

Five Cents · Stan's note


What I keep seeing in this pattern is that nobody on the operating side is wrong, exactly. The agency is doing the work it was scoped for. The marketing director is reading the report she was given. The CFO is asking the right question of the wrong layer. Each person is operating inside their own seam, and the account is failing in the gap between the seams.

The forty-seven-campaign account is not a marketing failure. It is a reading failure. Three handoffs in a row, nobody read the account end-to-end, because end-to-end reading was never the deliverable of any month's retainer. The structural defect was not in the platform. It was in the contract shape between operator and agency, which is a different argument I will make somewhere else.

What this case file is for: if your account looks like the setup, the action is not another agency switch. It is a top-down read by someone whose entire deliverable is the read.

Stan Tscherenkow · Marketing Atlas · 2026-05-07
Section 09 · Related Atlas entries.

Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages give the definition. The Position page gives the firm's defended doctrine. The hub gives the map.

If this is the pattern in your account

The next move is the read, not another agency.

If the case file maps to your account — multiple agency handoffs, campaign count creeping, ROAS sliding for reasons nobody can name — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the three-layer methodology, scoped at $999, delivered in seventy-two hours. If the verdict says install, the Sprint engagement runs the install. If the verdict says hold, you keep the read.