

Incrementality vs Attribution: Experiments for Causal Impact

Correlation-based attribution models misallocate credit and often inflate the impact of tactics like retargeting. Incrementality testing uses randomised test‑versus‑control experiments to ask whether conversions would have happened anyway, revealing the true causal impact of marketing investments and guiding more effective budget decisions.

Mar 08, 2026



Marketing dashboards promise clarity. Yet anyone who has sifted through platform reports knows the feeling of being inundated with claims: email and search both say they “drove” yesterday’s sale, while social insists it deserves the credit. When every channel boasts victory, the underlying question often goes unasked: Would this conversion have happened anyway?

That simple question sits at the heart of incrementality testing. Traditional attribution models slice and redistribute credit across touchpoints, but they rarely question whether the purchase was inevitable. As one practitioner notes, incrementality starts by asking whether a conversion would have occurred without the marketing at all[1]. Bringing that counterfactual into measurement reframes the purpose of analytics. Rather than assigning credit for an outcome, we begin to ask what caused it — and whether the investment changed the outcome.

Correlation isn’t causation

Correlation is seductive. When a chart shows that conversions rise after a campaign launch, it is tempting to declare success. But as the Statsig team explains, misinterpreting correlation as causation can lead us astray[2]. Seasonal spikes can lift multiple metrics simultaneously; confounders lurk everywhere, making weak connections look strong[3]. The classic example of ice‑cream sales and sunburns illustrates the point: both increase in summer, yet no one would claim that ice‑cream causes sunburns[4]. Marketing is full of similar traps. A retargeting ad may appear just before a purchase, but was it the cause, or was the customer already planning to buy?
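To see how easily a confounder manufactures a convincing chart, consider a minimal simulation (all numbers invented): a seasonal demand cycle drives both ad impressions and conversions, and the two series end up strongly correlated with no causal link between them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical confounder: seasonal demand over 52 weeks.
season = np.sin(np.linspace(0, 2 * np.pi, 52)) + 1.5

# Both series respond to the season; neither causes the other.
ad_impressions = 1000 * season + rng.normal(0, 100, 52)
conversions = 50 * season + rng.normal(0, 10, 52)

# A strong correlation appears despite zero causal link.
r = np.corrcoef(ad_impressions, conversions)[0, 1]
print(f"Correlation between impressions and conversions: {r:.2f}")
```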

Attribution models attempt to apportion credit within these noisy systems. Last‑click attribution assigns 100% credit to the final touchpoint; other models distribute credit across the journey[5]. But they remain rule‑based guesses, not experiments. They often overvalue bottom‑of‑funnel channels like retargeting while undervaluing upper‑funnel awareness efforts[6]. They cannot distinguish between customers who would have purchased anyway and those who were persuaded by marketing[7]. The consequence is misallocated budgets and a false sense of certainty.
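To make the rule-based nature of these models concrete, here is a toy sketch (hypothetical journeys, not any vendor’s implementation) of last‑click versus linear attribution. Both simply redistribute a fixed unit of credit; neither asks whether the conversion needed any touchpoint at all.

```python
from collections import defaultdict

def attribute(journey, model="last_click"):
    """Distribute credit for one conversion across touchpoints.

    journey: ordered list of channels, e.g. ["social", "search", "email"].
    """
    credit = defaultdict(float)
    if model == "last_click":
        credit[journey[-1]] += 1.0            # 100% to the final touch
    elif model == "linear":
        for channel in journey:               # equal share to every touch
            credit[channel] += 1.0 / len(journey)
    return dict(credit)

journey = ["social", "search", "retargeting"]
print(attribute(journey))                     # {'retargeting': 1.0}
print(attribute(journey, model="linear"))     # equal thirds
```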

What is incrementality testing?

Incrementality testing reframes measurement by introducing a control group. Measured’s FAQ describes it as a systematic test‑versus‑control design that isolates the true causal impact of media on sales[8]. The methodology holds back media in a statistically comparable segment while continuing to run it elsewhere, controlling for seasonality, promotions and other external factors[8]. Any lift observed is directly attributable to the media being tested. Over time, results can be aggregated to guide budget allocation and identify scaling opportunities[9].
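As a rough sketch of the arithmetic behind such a test (hypothetical counts, and a simple two-proportion z-test rather than Measured’s actual methodology), the lift is the difference in conversion rates between the exposed and held-out groups:

```python
import math

def incremental_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Relative lift and z-score for a test-versus-control holdout."""
    p_t, p_c = test_conv / test_n, ctrl_conv / ctrl_n
    lift = (p_t - p_c) / p_c                  # lift relative to control
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    return lift, (p_t - p_c) / se

# Hypothetical experiment with equal-sized exposed and holdout groups.
lift, z = incremental_lift(test_conv=1300, test_n=50_000,
                           ctrl_conv=1150, ctrl_n=50_000)
print(f"Relative lift: {lift:.1%}, z = {z:.2f}")
```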

Unlike many correlation‑based approaches, incrementality testing does not rely on historical relationships or user‑level tracking. For example, media mix modelling (MMM) analyses week‑to‑week variations in spend and sales to estimate each channel’s relative contribution. It requires extensive historical data and often includes non‑media factors such as economic conditions and weather[10]. While useful for high‑level planning, MMM remains correlational; it benefits from being calibrated with experimental results to ensure causal credibility[11].
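For intuition only, a bare-bones version of that week-to-week analysis can be written as an ordinary regression on simulated data (all figures invented; production MMMs add adstock, saturation curves and many more non-media controls):

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104

# Simulated weekly channel spend ($k) and a seasonality control.
search = rng.uniform(10, 50, weeks)
social = rng.uniform(5, 30, weeks)
season = np.sin(np.linspace(0, 4 * np.pi, weeks))

# Simulated sales with known "true" channel contributions.
sales = 200 + 1.8 * search + 0.6 * social + 40 * season \
        + rng.normal(0, 15, weeks)

# Ordinary least squares estimates each channel's relative contribution.
X = np.column_stack([np.ones(weeks), search, social, season])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["base", "search", "social", "season"], coef.round(2))))
```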

Multi‑touch attribution (MTA), by contrast, attempts to follow individual journeys and assign fractional credit to each touchpoint. Its granularity can be appealing, but privacy restrictions and data fragmentation often make its results inaccurate or even misleading[12]. In both cases, the models infer relationships rather than observe causal changes.

Incrementality testing stands apart because it creates a randomised experiment in which one group sees the marketing and another does not. Any difference in outcomes between the groups can therefore be attributed to the marketing intervention. This approach allows marketers to answer the counterfactual: what would have happened without the campaign?

Surprising truths revealed by experiments

When organisations move from attribution to incrementality, they often uncover uncomfortable truths. RightSideUp’s guide notes that many conversions would occur regardless of marketing; loyalty and habit drive customers back whether or not they see an ad[7]. Experiments make this visible. In traditional dashboards, retargeting campaigns often show astronomical return on ad spend, sometimes 10× or more. Incrementality tests frequently reveal that the true impact is far lower — perhaps 1.5× or even negative[13]. Why? Because retargeting focuses on people who were likely to convert anyway[14].
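The gap between reported and incremental return is easy to express. The sketch below uses hypothetical figures chosen to mirror the example above: a dashboard counts all attributed revenue, while a holdout subtracts the revenue the control group generated without the ads.

```python
def platform_roas(attributed_revenue, spend):
    """ROAS as a dashboard reports it: every attributed dollar counts."""
    return attributed_revenue / spend

def incremental_roas(test_revenue, ctrl_revenue, spend):
    """ROAS counting only revenue the holdout did not generate anyway."""
    return (test_revenue - ctrl_revenue) / spend

spend = 100  # hypothetical retargeting spend ($k)
print(platform_roas(attributed_revenue=1000, spend=spend))  # 10.0x
print(incremental_roas(test_revenue=1000, ctrl_revenue=850,
                       spend=spend))                        # 1.5x
```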

Conversely, channels that appear underwhelming in attribution can shine in experiments. Upper‑funnel prospecting campaigns or awareness initiatives may show mediocre performance in dashboards, yet deliver strong incremental lift when tested[15]. By measuring what actually changes behaviour rather than what simply coincides with it, incrementality testing challenges assumptions and redirects investment to where it truly moves the needle.

These insights extend beyond digital advertising. Holdout tests in email marketing can reveal how much of your subscriber base would have converted without the promotion. Geo‑testing for search or out‑of‑home advertising helps isolate the contribution of specific markets[16]. The lesson is consistent: experiments surface hidden signals and refute convenient narratives. They force marketing teams to confront the “credit‑claiming game,” where every platform takes credit for the same sale[17].
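One common way to read a geo test is a simple difference-in-differences, comparing how much more test markets grew than matched control markets. The sketch below uses made-up market data and glosses over the market matching and statistical inference that real geo experiments require.

```python
import numpy as np

def diff_in_diff(test_pre, test_post, ctrl_pre, ctrl_post):
    """Lift = (test change) - (control change) across the flight."""
    test_change = np.mean(test_post) - np.mean(test_pre)
    ctrl_change = np.mean(ctrl_post) - np.mean(ctrl_pre)
    return test_change - ctrl_change

# Hypothetical weekly sales ($k) in matched markets, before vs during.
lift = diff_in_diff(test_pre=[95, 102, 98], test_post=[118, 125, 121],
                    ctrl_pre=[97, 100, 99], ctrl_post=[104, 108, 106])
print(f"Incremental sales per week: ${lift:.1f}k")
```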

Beyond attribution: integrating experiments into decision systems

Recognising the value of experiments is only the first step. Organisations must also act on the results. Incrementality tests require planning, statistical rigour and cross‑functional collaboration. Tests run for weeks or months and cannot be run continuously on every tactic[18], so they work best when embedded within a broader measurement strategy. MMM can provide a macro‑level view of long‑term trends; experiments can calibrate those models and validate assumptions[11]. MTA, where feasible and privacy‑safe, can offer user‑level diagnostics. However, it does not establish causation and therefore is best complemented with experiments rather than treated as a replacement for them.

Implementing a test‑and‑learn culture also means confronting organisational inertia. Decision‑makers may need to shift budgets away from beloved channels when experiments show minimal incremental impact and invest in channels that tests prove are effective[13]. This can be uncomfortable; retargeting budgets may shrink while upper‑funnel investments grow. Leadership can frame these adjustments as learning opportunities rather than failures so that causal measurement becomes a decision engine rather than merely a reporting tool.

Conclusion: Towards a causal mindset

Attribution models have served as proxies for understanding marketing performance, but they are fundamentally guesses. Incrementality testing offers something different: evidence. By asking whether conversions would have happened anyway[1] and by isolating the true effect of each intervention[8], experiments transform measurement from a credit assignment exercise into a foundation for decision quality. They reveal that what looks best is not always what works best[19] and encourage marketers to invest where they can make a real difference. Causal measurement is not a silver bullet. It demands resources, patience and humility. But adopting an experimental mindset aligns measurement with Quantum’s worldview: value arises when data informs insight, insight shapes decisions and decisions drive action. In an environment where correlation is easy to measure and causation is hard, incrementality testing guides us toward the truth. It moves the industry beyond dashboards and into a discipline where economic outcomes are improved through thoughtful, evidence‑based decisions.

[1] [4] [5] [6] [7] [13] [14] [15] [16] [17] [19] Marketing Incrementality Testing: A Complete Guide to Measuring What Matters https://www.rightsideup.com/blog/guide-to-marketing-incrementality-testing

[2] [3] Correlation Does Not Equal Causation in A/B Testing https://www.statsig.com/perspectives/ab-testing-correlation-causation-insights

[8] [9] [10] [11] [12] [18] What is Incrementality testing vs MMM vs MTA? https://www.measured.com/faq/what-are-the-pros-and-cons-of-incrementality-testing-versus-mmm-or-mta/
