Incrementality
The actual lift a marketing channel produces above what would have happened without it. The attribution measure that survives platform-reporting bias and accidental over-counting.
Section 02 · Quick definition
Incrementality is the share of conversions a marketing channel actually caused, measured by comparing outcomes when the channel is on against outcomes when it is off. It is the test-vs-control measure of marketing impact, distinct from platform-reported conversions and from any attribution model. The test isolates the channel by withholding it from a randomly selected portion of the audience or geography while the rest receives normal exposure. The lift between the two is the incremental contribution. The test answers what platform reports cannot: what would have happened anyway?
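A minimal sketch of the arithmetic, with illustrative numbers rather than anyone's real data: a random slice of the audience is withheld, and the lift is the exposed group's conversions above the baseline the withheld group establishes.

# Test-vs-control arithmetic for a single channel (numbers are illustrative).
exposed_users, exposed_conversions = 100_000, 3_000   # audience that saw the channel
holdout_users, holdout_conversions = 100_000, 2_400   # randomly withheld audience

exposed_rate = exposed_conversions / exposed_users    # 3.0%
baseline_rate = holdout_conversions / holdout_users   # 2.4%, what happens anyway

# Conversions the channel actually caused, and their share of what a platform would claim.
incremental_conversions = (exposed_rate - baseline_rate) * exposed_users   # 600
incremental_share = incremental_conversions / exposed_conversions          # 0.20

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Share of exposed-group conversions the channel caused: {incremental_share:.0%}")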
Section 03 · Why it matters
Every marketing platform reports its own conversions and credits itself for as many as it can defensibly claim. Google Ads, Meta, TikTok, and the email tool will collectively claim more conversions than the business actually had. The over-count is structural: each platform sees its own touch on a converting buyer's path and counts the conversion. Without a measure that survives platform reporting, the operator cannot know whether the channel is producing new revenue or taking credit for revenue that would have arrived anyway.
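The scale of the over-count is visible the moment the platform claims are summed against the order system for the same window. A rough check, with hypothetical numbers:

# Hypothetical month: each platform claims every conversion it touched.
platform_claims = {"Google Ads": 1_400, "Meta": 1_100, "TikTok": 450, "Email": 900}
actual_orders = 2_600   # orders the business actually recorded in the same window

claimed_total = sum(platform_claims.values())   # 3,850
over_claim = claimed_total / actual_orders      # ~1.48

print(f"Platforms collectively claim {claimed_total:,} conversions "
      f"against {actual_orders:,} real orders ({over_claim:.0%} of actual).")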
Incrementality matters most for branded search, retargeting, and any channel that sits late in the conversion path. Branded search frequently shows extraordinary platform-reported ROAS because it is closing buyers who already decided to buy. The incrementality test usually shows the lift is a fraction of the platform claim. Retargeting shows similar over-credit. The operator who runs one incrementality test on either channel learns more than the operator who reads twelve months of platform dashboards.
The practical stake is that incrementality is the only attribution measure that survives a CFO's scrutiny. The platform numbers will not. The model output will not. The geo-holdout test result will.
Section 04 · How it works
An incrementality test creates two comparable groups: one that receives the marketing exposure and one that does not. The test runs long enough to detect a difference in conversion rate, revenue per user, or lifetime value. The lift between the two groups is the incremental contribution of the channel. The methods differ by channel and by data availability, but the principle is constant.
Geo holdout. Split the country into matched geographic regions. Run the channel in some regions and pause it in others. Compare aggregate conversion outcomes between treatment and control geos over four to six weeks. Suitable for paid search, paid social, and TV. The cleanest available method for most operators; a minimal readout is sketched below, after the four methods.
Platform lift study. The platform-side test where the bidding algorithm runs as usual but the impression is suppressed for a randomized control group. Available on Meta and on Google as Conversion Lift studies. Cheaper than a geo holdout. Less trusted because the platform is grading its own homework.
Channel blackout. Pause the channel entirely for a defined window and observe the change in total conversions. The crudest method and often the most informative. Pause branded search for two weeks and the operator learns within the first week whether the channel was buying back demand that would have come anyway.
Pixel-versus-server comparison. Compare conversion volume from users where the pixel fired against users where it did not, using server-side data. Useful for diagnosing how much of the platform-reported conversion volume is observed versus modeled. Less a proper incrementality test, more a sanity check on the inputs.
The four methods produce different precision at different costs. The right choice depends on the channel under test, the volume of conversions per week, and the operator's tolerance for revenue at risk during the test window.
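A minimal geo-holdout readout, assuming the treatment and control geo groups were matched on pre-test volume. Every number below is illustrative, not a benchmark.

# Weekly conversions over a four-week test window (illustrative data).
treatment_geos = [1_180, 1_220, 1_150, 1_240]   # regions where the channel stayed on
control_geos   = [1_010, 1_060,   990, 1_030]   # regions where the channel was paused

# Correction for residual size differences between the geo groups,
# taken from a pre-test period in which both groups ran the channel.
pretest_ratio = 1.02   # treatment geos converted about 2% more before the test

expected_without_channel = [c * pretest_ratio for c in control_geos]
incremental = sum(t - e for t, e in zip(treatment_geos, expected_without_channel))
lift = incremental / sum(expected_without_channel)

print(f"Incremental conversions over the window: {incremental:.0f}")
print(f"Lift attributable to the channel: {lift:.1%}")

The sketch is the readout, not the design. The real design question is whether four to six weeks at the operator's conversion volume can detect a lift of the expected size.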
Section 05 · Common misunderstandings
“Our platform reports incremental ROAS, so we already have incrementality.”
Platform-reported incrementality is the platform's own estimate of its lift, computed by the platform from data the platform owns. It is not an independent test. The platform has every commercial incentive to estimate its lift generously. A real incrementality test is run by the operator with a control group the platform did not select.
“Branded search is incremental because the platform says ROAS is 12x.”
Branded search ROAS is high because branded search closes buyers who already decided to buy. Pause branded search and most of those buyers find the site through organic, direct, or a different click. The incrementality of branded search is usually 20–40% of the platform-claimed conversions, not 100%.
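To put numbers on it with the figures above: at a platform-reported ROAS of 12x and a measured incrementality of 30 percent, the midpoint of that range, the incremental ROAS is roughly 12 × 0.3 ≈ 3.6x, a solid channel rather than a miraculous one.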
“Incrementality testing is too expensive for our scale.”
A two-week branded-search pause costs nothing to run and produces a directional answer most operators have never seen. A four-week geo holdout on a paid social budget of $50k per month costs less in foregone conversions than the operator is currently mis-allocating to the wrong channels. The cost is mostly courage.
“A single test answers the question forever.”
Incrementality is a function of the channel, the audience, the offer, and the moment in the year. A test run in Q1 may not predict Q4. The discipline is to test the channels with the highest reported ROAS and the highest dependency on platform credit at least once a year.
“If the test is hard to design, it is not worth running.”
A crude incrementality test is more informative than a sophisticated attribution model. The honest version of the test is a channel pause for a defined window with a clear pre-registered hypothesis. The result is rough and trustworthy. The sophisticated model is precise and structurally biased.
Section 06 · Diagnostic questions
Has any channel ever been tested for incrementality with an operator-side method, or are all attribution beliefs based on platform-reported numbers?
What share of branded-search conversions comes from buyers who would not have arrived through organic, direct, or another paid path?
Has the retargeting channel ever been paused long enough to observe whether the conversions it claims would arrive anyway?
What is the gap between sum-of-platform-reported conversions and Shopify-reported orders for the same window, and how is the gap interpreted?
Is there a budget reserved for incrementality testing, or is it understood as a research expense the operator never approves?
Which channel would the operator be most embarrassed to discover was not incremental, and has it been tested?
How are incrementality results being routed back into Smart Bidding, MER targets, and budget allocation?
Section 07 · Related Atlas entries
Data-Driven Attribution. The default GA4 model. Learned credit, not measured cause. The model that incrementality testing is meant to validate.
Section 08 · Five Cents
An operator who runs one incrementality test learns more about their paid channels than an operator who reads twelve months of platform dashboards. I have watched founders spend a year arguing over which attribution model is correct and never run the two-week branded-search pause that would have ended the argument. The pause is the cheapest experiment in marketing and the most-avoided. Avoided because the answer is usually that a chunk of the budget was buying conversions that would have arrived anyway, and that answer rearranges the budget conversation. The unwillingness to run the test is what keeps the dashboard arguments alive. The willingness is what ends them.
Section 09 · Sources