Stan Consulting LLC · Marketing Atlas · Case File · Google Ads · The Conversion Goal That Counted Add-to-Carts as Sales


The Conversion Goal That Counted Add-to-Carts as Sales.

case_type: composite
proof_level: composite_pattern
cluster: performance-max-failure
published: 2026-05-07
Section 01 · The setup.

A Shopify food and beverage operator. Three-and-a-half million annualized. Sixty-two thousand monthly Google Ads spend across one Performance Max campaign and three Search campaigns. Conversion-tracking inherited from Google Tag Manager via the default Shopify-Google integration. Nobody had read the conversion-goal grouping in twelve months.

That is the composite. The names change. The configuration does not.

The brand had been live for three years. The Google Ads account had been opened by an agency that had since been replaced. The replacement agency had been hired to "lean into PMax" because PMax was the platform's recommended consolidation. The conversion-tracking layer had been left untouched on the assumption that the prior agency had set it up correctly. The default GTM template had pushed both add_to_cart and purchase events into Google Ads as conversions. Both had been imported as primary conversion goals.

The operator had read the Google Ads dashboard for twelve months. The dashboard reported conversions. The dashboard reported revenue. The numbers looked good. Reported ROAS held at 3.8x for most of the window. The board liked the number. The agency liked the number. Nobody opened the conversion-goal grouping to see which events were inside it.

The audit was commissioned during board prep. The CFO asked the marketing lead to reconcile reported ad-driven revenue with Shopify revenue. The two numbers did not agree. That is the moment this case file begins.

  • Stage · Shopify · food and beverage DTC
  • Annualized revenue · $3.4M
  • Google Ads spend · $62K monthly
  • Active campaigns · 1 Performance Max · 3 Search
  • Reported ROAS · 3.8x · 12-week trailing
  • Actual purchase ROAS · 1.4x · same window
  • Conversion-goal grouping · Add-to-cart + purchase, both primary
  • Audit window · 12 weeks of conversion data
Section 02 · The visible problem.

Two numbers and a board meeting. That is what the operator brought to the audit.

Reported ROAS, twelve-week trailing, was 3.8x. The Google Ads dashboard said the operator was making three dollars and eighty cents in revenue for every dollar spent. The agency report said the same thing. Both reports were consistent with each other.

Shopify total revenue, same window, told a different story. Total revenue had grown four percent year over year. The Google Ads account had grown spend twenty-eight percent. If reported ROAS were correct, Shopify revenue should have moved more than four percent. The math did not close.

Actual purchase ROAS, when reconstructed from Shopify-side purchase data only, was 1.4x. Less than half of what the dashboard said. The dashboard had been right about the conversion count and wrong about what the conversions were.
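The arithmetic of the reconciliation is small enough to sketch. The figures below are illustrative round numbers chosen to match the reported ratios, not the account's actual ledger:

```javascript
// Illustrative reconciliation of reported vs purchase-only ROAS.
// Spend and revenue figures are hypothetical, chosen to mirror the case's ratios.
function roas(revenue, spend) {
  return revenue / spend;
}

const spend = 100000;            // illustrative 12-week ad spend
const reportedRevenue = 380000;  // what the combined conversion goal summed
const purchaseRevenue = 140000;  // Shopify-side purchase revenue only

const reportedRoas = roas(reportedRevenue, spend); // 3.8
const purchaseRoas = roas(purchaseRevenue, spend); // 1.4

// How much of the reported number is not purchase revenue.
const inflation = reportedRoas / purchaseRoas;

console.log(reportedRoas, purchaseRoas, inflation.toFixed(2)); // 3.8 1.4 "2.71"
```

The inflation factor is the number the dashboard cannot show, because both of its inputs come from the same corrupted goal.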

The board meeting was in three weeks. The CFO asked the marketing lead to reconcile the two reads. The marketing lead asked Stan Consulting to read the account before the meeting.

The audit was scoped at this point. Seventy-two-hour written verdict. The brief was one sentence: tell us why the dashboard says we are profitable and the bank account says we are not.

Section 03 · The wrong explanation.

The agency had given three explanations. Each one was almost-right and pointed away from the conversion-goal grouping.

Wrong reason 01

"Attribution windows differ between Google Ads and Shopify." The agency had pointed at last-click versus data-driven attribution as the source of the gap. There is sometimes a real difference here. There is not a difference of more than two-and-a-half-times. Attribution-window drift can produce a fifteen-percent gap; it cannot produce a 3.8x-versus-1.4x gap. The explanation borrowed real terminology to dress up an answer that did not address the size of the discrepancy.

Wrong reason 02

"Shopify under-reports because of multi-channel attribution." A reflex defense that gets reached for whenever Shopify revenue does not match an ad-platform read. Shopify reports purchase revenue at face value — what the customer paid, what the order shipped against. Multi-channel attribution belongs on the marketing-side ledger, not on the revenue-side ledger. The agency was using "multi-channel" to suggest the Shopify number was wrong rather than admitting the Google Ads number was inflated.

Wrong reason 03

"PMax learns over time; ROAS will normalize." The temporal alibi. The argument is that Smart Bidding takes weeks to converge on the right bid level and that the gap will close as the algorithm matures. This is the most convincing of the three because it is partly true on a short window. After twelve weeks the algorithm has converged. It has converged on the wrong target because it was trained against a corrupted conversion goal. More time does not fix a goal-definition defect; it deepens the misalignment.

All three explanations protected the dashboard. The defect was upstream of the dashboard, in the conversion-goal grouping that fed it.

Section 04 · The structural cause.

Add-to-cart was a primary conversion goal. That sentence is the verdict. Smart Bidding was optimizing against a combined signal that counted intent as outcome.

The defect was a single line in the conversion-goal grouping. The default Shopify-Google integration via GTM had pushed three events to Google Ads as conversions: view_item, add_to_cart, and purchase. The agency had imported all three. view_item had been demoted to secondary. add_to_cart and purchase had been left in the primary grouping. The bidding algorithms used the combined primary signal as the optimization target.

Five things were true at the same time. Each one was a consequence of the goal-grouping defect. None of them were independent.

One. The combined primary conversion goal counted add-to-carts and purchases as a single signal. A purchase carries an add-to-cart event with it; the same user produced two "conversions" for the algorithm. The reported conversion count was inflated by the add-to-cart-only completions where the user added to cart and abandoned. The algorithm could not tell the two apart because they shared the same primary-goal status.

Two. Smart Bidding learned that traffic with high add-to-cart propensity was high-value. It optimized toward that traffic. The audience segments the algorithm preferred were users who add things to carts and abandon — deal-shoppers, comparison-shoppers, list-builders. The campaign got cheaper "conversions" and lower-rate purchases simultaneously, which is the structural fingerprint of this defect.

Three. Reported revenue was double-counted at the conversion-value level. The default GTM tag had attached a value parameter to the add_to_cart event equal to the cart's monetary value at the moment of add. When the same user purchased, the purchase event also fired with its own value. The combined primary goal summed both. The dashboard read $3.80 in revenue per dollar spent because it was counting the same dollar twice for converters and once for cart-abandoners.

Four. Performance Max compounded the issue more than Search did. PMax leans heavily on whatever signal it is told to optimize against because the campaign type has fewer manual controls. Without a clean primary conversion, PMax optimized against the inflated combined goal more aggressively than the Search campaigns did, and PMax was the campaign with the most spend. The bigger the budget on PMax, the bigger the misallocation.

Five. The gap between reported ROAS and actual purchase ROAS widened the longer the campaign ran. After twelve weeks the algorithm had fully converged on the wrong target. The campaign was systematically buying clicks from users who would add to cart and not purchase, because that is what the bid signal said was a conversion. More learning made the misalignment more efficient.

Five things, one shape. The dashboard was right about what it was measuring. The dashboard was measuring something other than revenue.
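The double-counting mechanic behind the five consequences can be sketched with a toy event stream. Users, event values, and counts below are invented for illustration; they are not the account's data:

```javascript
// Toy event stream: one buyer, one cart-abandoner.
// A combined primary goal counts the buyer twice and the abandoner's
// cart value as revenue, even though no second sale exists.
const events = [
  { user: "buyer",     name: "add_to_cart", value: 60 }, // cart value at add
  { user: "buyer",     name: "purchase",    value: 60 }, // the actual order
  { user: "abandoner", name: "add_to_cart", value: 40 }, // never purchases
];

// Combined primary goal: every event is a conversion, every value is revenue.
const combined = events.reduce(
  (acc, e) => ({ conversions: acc.conversions + 1, revenue: acc.revenue + e.value }),
  { conversions: 0, revenue: 0 }
);

// Purchase-only goal: the only conversion is the only sale.
const purchaseOnly = events
  .filter((e) => e.name === "purchase")
  .reduce(
    (acc, e) => ({ conversions: acc.conversions + 1, revenue: acc.revenue + e.value }),
    { conversions: 0, revenue: 0 }
  );

console.log(combined);     // { conversions: 3, revenue: 160 }
console.log(purchaseOnly); // { conversions: 1, revenue: 60 }
```

One sale, sixty dollars. The combined goal reports three conversions and one hundred sixty dollars. That is the entire defect in miniature.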

Section 05 · The decomposition.

The decomposition reads in three layers. The tracking-layer defect that produced the corrupted signal. The bidding-layer behaviour that locked it in. The reporting-layer distortion that made the corruption invisible to the operator.

L1 · Tracking and goal definition · Source defect

The conversion-goal grouping had add-to-cart and purchase both marked as primary. This is the structural defect. Add-to-cart is a useful secondary signal for some campaigns and an unsafe primary signal for almost all of them. The default GTM-Shopify integration pushed both events at parity, and the agency had imported them at parity. The algorithm read the combined goal as the truth.

The conversion-value attached to the add-to-cart event compounded the issue. Add-to-cart should not carry a revenue value at all in the Google Ads sense; the user has not transacted. Counting cart value as revenue value pushes the bidding algorithm toward audiences that build large carts they do not buy.

  • Primary conversion goal · add-to-cart + purchase · combined
  • Conversion value · add-to-cart event carrying cart-value parameter
  • GTM source · default Shopify-Google integration, never reviewed
  • Goal grouping · not opened in 12 months by either operator or agency
L2 · Bidding-algorithm convergence · Compounding defect

With the Layer-1 defect named, the bidding behaviour was deterministic. Smart Bidding optimized against the combined goal. The audiences that converted fastest under the combined definition were add-to-cart-and-abandon users. The algorithm bid up on those audiences. The campaign delivered more impressions to that segment, generated more add-to-cart events, and the dashboard read better. The campaign was working as configured. The configuration was wrong.

Performance Max amplified the misalignment more than Search did. PMax has fewer levers; it reads the conversion goal harder. The PMax campaign's serving mix shifted measurably toward add-to-cart-prone audiences over the twelve-week window, which is visible in retrospect as a drop in the campaign's purchase rate alongside a rise in its reported conversion count.

  • Smart Bidding strategy · Maximize Conversion Value, applied to the corrupted goal
  • PMax target ROAS · effectively meaningless because the value signal was wrong
  • Audience drift · toward cart-abandoner segments over 12 weeks
  • Purchase rate · declining inside reported "converters"
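Why a bidder fed the combined goal drifts toward cart-abandoner audiences can be shown with a toy scoring model. The CPCs and rates below are invented for illustration; real bidding is far more complex, but the preference inversion is the point:

```javascript
// Two stylized audiences with invented economics.
// "cart-builders" add to cart often and rarely buy; "buyers" do the reverse.
const audiences = [
  { name: "cart-builders", cpc: 0.50, atcRate: 0.12, purchaseRate: 0.005 },
  { name: "buyers",        cpc: 0.90, atcRate: 0.05, purchaseRate: 0.030 },
];

// Conversions per dollar under a combined goal (add-to-cart + purchase both count).
const combinedScore = (a) => (a.atcRate + a.purchaseRate) / a.cpc;
// Conversions per dollar under a purchase-only goal.
const purchaseScore = (a) => a.purchaseRate / a.cpc;

// Which audience a greedy optimizer prefers under each definition.
const bestUnder = (score) =>
  audiences.reduce((best, a) => (score(a) > score(best) ? a : best)).name;

console.log(bestUnder(combinedScore)); // "cart-builders"
console.log(bestUnder(purchaseScore)); // "buyers"
```

Same audiences, same economics, opposite preference. The only variable that changed is the goal definition.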
L3 · Reporting distortion · Read-layer defect

The Google Ads dashboard presented reported revenue and conversion count as if they were the operator's revenue and orders. The two systems had diverged at the Layer-1 definition; the dashboard could not flag the divergence because the dashboard does not know what the operator considers revenue. The reconciliation between Google Ads reported revenue and Shopify recognized revenue had never been done. The discrepancy was visible in the bank account before it was visible in the report.

Inside the agency report, conversion-count was prominent and conversion-quality was absent. There was no chart of purchase rate among reported converters. There was no purchase-only ROAS line. The reports the agency produced answered the question the agency had been scoped to answer, which was "is Google Ads producing reported conversions". The question the operator needed answered was "is Google Ads producing recognized purchase revenue", and that question was not in the deliverable.

  • Dashboard reconciliation with Shopify · never performed
  • Purchase-only ROAS · not reported in the agency deliverable
  • Purchase rate among reported converters · not charted
  • Discrepancy · only surfaced during board-meeting prep
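The reconciliation that was never performed is a small computation. A minimal sketch, assuming a 15-percent divergence tolerance (the tolerance and the example figures are assumptions, not the firm's standard):

```javascript
// Weekly reconciliation check: Google Ads reported revenue vs Shopify
// recognized revenue. Flags any divergence beyond a chosen tolerance.
function reconcile(adsReportedRevenue, shopifyRecognizedRevenue, tolerance = 0.15) {
  const ratio = adsReportedRevenue / shopifyRecognizedRevenue;
  return {
    ratio: Number(ratio.toFixed(2)),
    withinTolerance: Math.abs(ratio - 1) <= tolerance,
  };
}

// A healthy week: small attribution-level drift, inside tolerance.
console.log(reconcile(105000, 100000)); // { ratio: 1.05, withinTolerance: true }
// The case-file pattern: reported revenue nearly triple recognized revenue.
console.log(reconcile(380000, 140000)); // { ratio: 2.71, withinTolerance: false }
```

Ten lines of arithmetic, run weekly, would have surfaced the defect in the first reporting cycle instead of at board prep.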
Section 06 · The fix, in install order.

The audit's written verdict named the install order. Order matters. Re-pointing Smart Bidding before fixing the conversion goal would have stripped the campaign's learning history and re-trained against the same corrupted target.

The audit led into the Conversion Second Opinion engagement format and from there into a thirty-day install. The conversion-goal cleanup below is what was installed.

  1. Day one · Demote add-to-cart to secondary

    Conversion-goal grouping rewritten. Purchase set as the only primary conversion. Add-to-cart demoted to secondary. view_item left at secondary. The primary conversion is now the event the operator considers a sale.

  2. Day one · Strip the cart-value parameter from add-to-cart

    The GTM tag firing add-to-cart updated to remove the value parameter, or to send a value of zero. Add-to-cart is not revenue. It does not carry revenue value into the bidding signal. The change is small in the GTM workspace and large in its consequences.

  3. Week one · Confirm the purchase event fires correctly

    Purchase event verified in GTM preview mode against a live test order. Conversion value confirmed against actual order subtotal, with shipping and tax handled per the operator's revenue-recognition policy. The primary signal is now clean and verified end to end.

  4. Weeks two and three · Smart Bidding relearning window

    Performance Max and Search campaigns enter a relearning window with widened ROAS targets for fourteen days. The campaigns are about to discover that the audiences they had been buying are not the audiences that produce purchases, and the bid economics will move accordingly. Daily monitoring through the relearning window. Weekly afterward.

  5. Week four · Reporting reconciliation installed

    Weekly chart added: Google Ads reported revenue, Shopify recognized revenue, on the same time axis. Purchase-only ROAS reported as a separate line from combined-goal ROAS. The reporting layer is now built so the next signal-quality defect would be visible inside the first reporting cycle, not at the next board meeting.

  6. Month two onward · Board read

    Board read shifts from "reported ROAS" to "purchase ROAS reconciled with Shopify revenue." The reported number had been protecting a story that was not in the bank account. The new read aligns the dashboard with the recognized revenue line. Subsequent decisions about budget, channel mix, and agency relationships are made against numbers that match the cash flow.
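The tag shapes after steps one through three can be sketched as plain dataLayer-style pushes. The keys and values below follow the common GTM dataLayer pattern but are illustrative; the exact payload in a given Shopify integration may differ:

```javascript
// Sketch of the fixed event payloads (illustrative keys and values).
const dataLayer = [];

// add_to_cart: still tracked, but carries no revenue value. It is imported
// into Google Ads as a secondary conversion, not a bidding signal.
dataLayer.push({
  event: "add_to_cart",
  value: 0,            // cart value deliberately NOT sent as revenue
  currency: "USD",
});

// purchase: the only event that carries revenue, verified against the order.
dataLayer.push({
  event: "purchase",
  value: 84.5,                  // order subtotal per revenue-recognition policy
  currency: "USD",
  transaction_id: "ORDER-1001", // hypothetical order id
});

console.log(dataLayer.length); // 2
```

The structural invariant to verify after any tag change: exactly one event type carries revenue, and that event is the one the operator calls a sale.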

Section 07 · The lesson.

The conversion-goal grouping is the upstream signal everything in the account optimizes against. Smart Bidding does not know what a purchase means to the operator. Smart Bidding knows what is in the primary-goal bucket. If add-to-cart is in that bucket, the algorithm will buy add-to-cart traffic. The platform is not malicious here. It is doing exactly what the configuration told it to do.

The default GTM-Shopify integration pushes add-to-cart at parity with purchase because the integration is built for analytics-style measurement, not for bidding-target use. The two use cases have different signal-quality requirements. Treating an analytics signal as a bidding-target signal is the failure mode that produced this case file. The integration is not wrong; the import is.

The lesson is that conversion-goal hygiene is the precondition for any other claim about Google Ads performance being true. Reported ROAS is a function of the conversion-goal definition, not a measurement of revenue. The two are sometimes the same. They are sometimes very far apart. Reading the dashboard without reading the goal grouping that feeds it is reading a number with no idea what it counts.

Five Cents · Stan's note

The thing that bothers me most about this pattern is how easy the defect is to find and how rarely anyone looks. The conversion-goal grouping is two clicks away from the Google Ads dashboard. Anyone with operator access can open it. The grouping shows what events are inside the primary goal in plain text. There is no decoder ring. The defect would have been visible in any month of the twelve months it ran. Nobody opened the screen, because the dashboard above it was reporting numbers that looked good.

The 2.4-point gap between reported ROAS (3.8x) and actual purchase ROAS (1.4x) in this case file is not extreme. I have seen the gap larger. I have seen accounts where the reported ROAS was twice the actual purchase ROAS for two years. The board read the dashboard. The agency read the dashboard. The CFO read the agency report. Nobody read the conversion-goal grouping. The whole apparatus had been built to make a single number the deliverable, and the single number was upstream of where the truth lived.

What the case file is for: if you have not opened the conversion-goal grouping in the last quarter, open it now. Read what events are inside the primary goal. Read what value parameters are attached. If add-to-cart is in there with a cart-value parameter, you have this case file. The Conversion Second Opinion is the formal version of the read. The informal version takes ten minutes and is free.

Stan Tscherenkow · Marketing Atlas · 2026-05-07
Section 09 · Related Atlas entries.

Each link below points at a related Atlas page that handles a piece of the case file in more depth. Reference pages give the definition. Position pages give the firm's defended doctrine. The hub gives the map.

If this is the pattern in your account

Open the conversion-goal grouping. Then call.

If the case file maps to your account — reported ROAS that does not match Shopify revenue, dashboard wins that do not show up in the bank account, conversion goals nobody has opened in months — the engagement that runs this diagnostic is the Conversion Second Opinion. A written verdict against the conversion-goal hygiene methodology, scoped at $999, delivered in seventy-two hours. If the verdict says install, the Sprint engagement runs the install. If the verdict says hold, you keep the read.