Stan Consulting LLC · Marketing Atlas · Reference · Incrementality

Incrementality.

The actual lift a marketing channel produces above what would have happened without it. The attribution measure that survives platform-reporting bias and structural over-counting.

Section 02 · Quick definition

Definition.

Incrementality is the share of conversions a marketing channel actually caused, measured by comparing outcomes when the channel is on against outcomes when it is off. It is the test-vs-control measure of marketing impact, distinct from platform-reported conversions and from any attribution model. The test isolates the channel by withholding it from a randomly selected portion of the audience or geography while the rest receives normal exposure. The lift between the two is the incremental contribution. The test answers what platform reports cannot: what would have happened anyway?
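The test-vs-control arithmetic behind this definition fits in a few lines. A minimal sketch, not a statistical test: the function name and all numbers are invented for illustration, and a real readout also needs a significance check on the lift.

```python
# Minimal sketch of the test-vs-control arithmetic behind incrementality.
# All numbers are illustrative, not from any real test.

def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Return the absolute lift and the share of observed conversions the channel caused."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    lift = test_rate - control_rate          # conversions caused per exposed user
    incremental_share = lift / test_rate     # fraction of the channel's claimed conversions it actually caused
    return lift, incremental_share

# Exposed group converts at 2.0%, holdout group at 1.5%:
lift, share = incremental_lift(400, 20_000, 300, 20_000)
print(f"absolute lift: {lift:.4f}")          # 0.0050
print(f"incremental share: {share:.0%}")     # 25% -- the rest would have converted anyway
```

The incremental share is the number the rest of this entry keeps returning to: the fraction of platform-claimed conversions that survive the control group.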

Section 03 · Why it matters

Why it matters.

Every marketing platform reports its own conversions and credits itself for as many as it can defensibly claim. Google Ads, Meta, TikTok, and the email tool will collectively claim more conversions than the business actually had. The over-count is structural: each platform sees its own touch on a converting buyer's path and counts the conversion. Without a measure that survives platform reporting, the operator cannot know whether the channel is producing new revenue or taking credit for revenue that would have arrived anyway.

Incrementality matters most for branded search, retargeting, and any channel that sits late in the conversion path. Branded search frequently shows extraordinary platform-reported ROAS because it is closing buyers who already decided to buy. The incrementality test usually shows the lift is a fraction of the platform claim. Retargeting shows similar over-credit. The operator who runs one incrementality test on either channel learns more than the operator who reads twelve months of platform dashboards.

The practical stake is that incrementality is the only attribution measure that survives a CFO's scrutiny. The platform numbers will not. The model output will not. The geo-holdout test result will.

Section 04 · How it works

How an incrementality test runs.

An incrementality test creates two comparable groups: one that receives the marketing exposure and one that does not. The test runs long enough to detect a difference in conversion rate, revenue per user, or lifetime value. The lift between the two groups is the incremental contribution of the channel. The methods differ by channel and by data availability, but the principle is constant.

  1. Method one · geo holdout

    Split the country into matched geographic regions. Run the channel in some regions and pause it in others. Compare aggregate conversion outcomes between treatment and control geos over four to six weeks. Suitable for paid search, paid social, and TV. The cleanest available method for most operators.

  2. Method two · ghost bidding

    The platform-side test in which the auction runs as normal but the winning impression is withheld from a randomized control group. Available on Meta and on Google as conversion lift studies. Cheaper than a geo holdout. Less trusted, because the platform is grading its own homework.

  3. Method three · channel pause

    Pause the channel entirely for a defined window and observe the change in total conversions. The crudest method and often the most informative. Pause branded search for two weeks and the operator learns within days whether the channel was buying back demand that would have arrived anyway.

  4. Method four · pixel-loss test

    Compare conversion volume from users where the pixel fired against users where it did not, using server-side data. Useful for diagnosing how much of platform-reported conversions are observed versus modeled. Less a proper incrementality test, more a sanity check on the inputs.

The four methods produce different precision at different costs. The right choice depends on the channel under test, the volume of conversions per week, and the operator's tolerance for revenue at risk during the test window.
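The geo-holdout readout (method one) can be sketched as a simple difference-in-trends, under the assumption that the channel-on geos supply the counterfactual trend for the paused geos. All counts below are invented for illustration; a real test needs matched geos and a significance check.

```python
# Illustrative geo-holdout readout: the channel stays on in one set of matched
# geos and is paused in the other. All numbers are invented for the sketch.

pre_on, pre_paused = 1000, 980   # weekly conversions before the test, channel on everywhere
dur_on, dur_paused = 1040, 960   # weekly conversions during the test window

trend = dur_on / pre_on                     # market trend, estimated from the channel-on geos
expected_paused = pre_paused * trend        # what the paused geos "should" have done, channel on
incremental = expected_paused - dur_paused  # conversions lost to the pause = the channel's weekly lift
incremental_share = incremental / expected_paused

print(f"incremental conversions per week: {incremental:.0f}")
print(f"incremental share of channel-on volume: {incremental_share:.1%}")
```

Note what the sketch implies: a channel can report thousands of conversions while the holdout attributes only a single-digit share of volume to it.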

Section 05 · Common misunderstandings

What people get wrong.

  1. “Our platform reports incremental ROAS, so we already have incrementality.”

    Platform-reported incrementality is the platform's own estimate of its lift, computed by the platform from data the platform owns. It is not an independent test. The platform has every commercial incentive to estimate its lift generously. A real incrementality test is run by the operator with a control group the platform did not select.

  2. “Branded search is incremental because the platform says ROAS is 12x.”

    Branded search ROAS is high because branded search closes buyers who already decided to buy. Pause branded search and most of those buyers find the site through organic, direct, or a different click. The incrementality of branded search is usually 20–40% of the platform-claimed conversions, not 100%.

  3. “Incrementality testing is too expensive for our scale.”

    A two-week branded-search pause costs nothing to run and produces a directional answer most operators have never seen. A four-week geo holdout on a paid social budget of $50k per month costs less in foregone conversions than the operator is currently mis-allocating to the wrong channels. The cost is mostly courage.

  4. “A single test answers the question forever.”

    Incrementality is a function of the channel, the audience, the offer, and the moment in the year. A test run in Q1 may not predict Q4. The discipline is to test the channels with the highest reported ROAS and the highest dependency on platform credit at least once a year.

  5. “If the test is hard to design, it is not worth running.”

    A crude incrementality test is more informative than a sophisticated attribution model. The honest version of the test is a channel pause for a defined window with a clear pre-registered hypothesis. The result is rough and trustworthy. The sophisticated model is precise and structurally biased.
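The correction for misunderstanding two is a single multiplication: scale the platform-reported ROAS by the tested incremental share. The 12x and 30% figures below are illustrative, echoing the branded-search example above.

```python
# Re-scale a platform-reported ROAS by the incremental share a holdout test produced.
# Both input numbers are illustrative, not measured.

platform_roas = 12.0       # revenue per ad dollar, as the platform reports it
incremental_share = 0.30   # share of claimed conversions the test attributed to the channel

true_roas = platform_roas * incremental_share
print(f"incrementality-adjusted ROAS: {true_roas:.1f}x")   # 3.6x
```

A 12x channel that is 30% incremental is a 3.6x channel, which is the number the budget conversation should use.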

Section 06 · Diagnostic questions

Questions a Stan Consulting diagnostic asks.

  1. Has any channel ever been tested for incrementality with an operator-side method, or are all attribution beliefs based on platform-reported numbers?

  2. What share of branded-search conversions are unique buyers who would not have arrived through organic, direct, or another paid path?

  3. Has the retargeting channel ever been paused long enough to observe whether the conversions it claims would arrive anyway?

  4. What is the gap between sum-of-platform-reported conversions and Shopify-reported orders for the same window, and how is the gap interpreted?

  5. Is there a budget reserved for incrementality testing, or is it understood as a research expense the operator never approves?

  6. Which channel would the operator be most embarrassed to discover was not incremental, and has it been tested?

  7. How are incrementality results being routed back into Smart Bidding, MER targets, and budget allocation?
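The gap in question four can be computed directly from exports. A sketch with invented platform counts against a hypothetical order total; the platform names are real, the numbers are not.

```python
# Diagnostic question four: how badly do summed platform claims over-count actual orders?
# Conversion counts and the order total are invented for the sketch.

claimed = {"google_ads": 800, "meta": 650, "tiktok": 300, "email": 250}
shopify_orders = 1400                  # ground-truth orders for the same window

total_claimed = sum(claimed.values())
overcount_ratio = total_claimed / shopify_orders

print(f"platforms claim {total_claimed} conversions against {shopify_orders} orders")
print(f"over-count ratio: {overcount_ratio:.2f}x")
```

A ratio well above 1.0x is the structural over-count the entry describes: each platform counting its own touch on the same converting buyer.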

Section 07 · Related Atlas entries

Section 08 · Five Cents

An operator who runs one incrementality test learns more about their paid channels than an operator who reads twelve months of platform dashboards. I have watched founders spend a year arguing over which attribution model is correct and never run the two-week branded-search pause that would have ended the argument. The pause is the cheapest experiment in marketing and the most-avoided. Avoided because the answer is usually that a chunk of the budget was buying conversions that would have arrived anyway, and that answer rearranges the budget conversation. The unwillingness to run the test is what keeps the dashboard arguments alive. The willingness is what ends them.

Stan · Marketing Atlas

Section 09 · Sources

Sources.

  1. Google Ads Help · About conversion lift studies · Official documentation on Google's platform-side incrementality measurement and the conditions under which conversion lift can be measured.
  2. Think with Google · Incrementality testing in marketing · Reference on geo-holdout and ghost-bidding methodologies, including the trade-offs between platform-side and operator-side measurement.
  3. Search Engine Land · Incrementality testing for paid search · Practitioner reference on running incrementality tests on paid search channels, including branded-search holdout designs.
  4. Search Engine Journal · A practical guide to incrementality testing · Practitioner reference covering test design, sample-size requirements, and common failure modes in operator-side incrementality work.
  5. CXL · Incrementality testing fundamentals · Independent practitioner reference on why incrementality is structurally distinct from attribution and how to interpret test results responsibly.