Not every channel needs an experiment right now. The goal is a testing programme that builds a growing library of causal benchmarks — starting where the incrementality uncertainty is highest and the stakes of being wrong are largest.
Practical Guide - 7 min read - MASS Analytics
The most common question when building an incrementality measurement programme is: where do we start? The instinct is often to test the channels you’re most confident in, to validate existing beliefs. This is the wrong instinct. The most valuable incrementality tests are the ones that resolve genuine uncertainty, specifically uncertainty about channels where the stakes of being wrong are high.
The Four Prioritisation Signals
Start where the combination of incrementality uncertainty and financial consequence is greatest:
Channels with low spend variation
When a channel’s budget barely changes over the modelling period, MMM has little variation to learn from and needs a complementary layer of incrementality testing to isolate the channel’s specific contribution.
New channels without history
MMM needs roughly two to three years of data to build reliable priors for a channel. For a new format or platform, those priors don’t exist yet. An experiment can give you the first causal benchmark before you scale spend.
Hard-to-measure media
TV, radio, OOH, and brand advertising drive purchases through pathways attribution can’t track: offline conversions, delayed decisions, cross-device paths. The true incremental effect is systematically underestimated.
Budget decisions under pressure
When finance asks for cuts, the channels that look weakest in the model are the first on the list. You can use incrementality evidence to verify what’s actually driving your business.
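The four signals above reduce to one prioritisation rule: test first where incrementality uncertainty and financial stakes are both high. A minimal sketch of that ranking, with invented channel names, uncertainty scores, and spend figures purely for illustration:

```python
# Hypothetical sketch: rank channels by a simple priority score —
# incrementality uncertainty (0–1, how unsure the MMM is about the
# channel's true contribution) multiplied by annual spend (the stakes).
# All names and figures below are illustrative, not benchmarks.

channels = [
    # (name, incrementality_uncertainty, annual_spend)
    ("Branded search", 0.2, 1_200_000),
    ("Linear TV",      0.8, 3_000_000),
    ("Retail media",   0.7,   900_000),
    ("New CTV pilot",  0.9,   400_000),
]

def priority_score(uncertainty: float, spend: float) -> float:
    """Uncertainty x stakes: test first where both are high."""
    return uncertainty * spend

ranked = sorted(channels, key=lambda c: priority_score(c[1], c[2]), reverse=True)
for name, u, spend in ranked:
    print(f"{name:15s} score={priority_score(u, spend):>12,.0f}")
```

Under these illustrative numbers, high-spend Linear TV with high uncertainty outranks a well-understood branded search channel even though the latter spends more than some other candidates.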
Three Levels of Calibration
Not all calibration is equal. There are three approaches, each with a different level of rigour:
Level 1
Qualitative comparison
Review incrementality results alongside MMM outputs, note where they agree and disagree, and use that to inform your thinking, without changing the model. Useful as a starting point, but the weakest method and best treated as a last resort.
Level 2
Model selection
Run multiple MMM specifications and select those that align better with incrementality evidence. Uses the experiment to steer modelling choices without directly integrating the causal estimates. Materially stronger than qualitative comparison alone.
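A minimal sketch of what Level 2 selection can look like in practice: among several candidate specifications, prefer the one whose channel ROI estimate lands closest to the experiment's causal estimate. The spec names, ROI figures, and fit scores below are invented for illustration:

```python
# Hypothetical sketch of model selection against an experiment.
# experiment_roi: the causal ROI measured in an incrementality test.
experiment_roi = 1.8

# Candidate MMM specifications: name -> (estimated channel ROI, in-sample fit)
candidate_specs = {
    "adstock_geometric": (3.1, 0.92),
    "adstock_weibull":   (2.0, 0.90),
    "no_saturation":     (4.5, 0.94),
}

def calibration_gap(spec_roi: float) -> float:
    """Distance between the model's ROI estimate and the experiment's."""
    return abs(spec_roi - experiment_roi)

# Select the spec whose estimate is nearest the experimental benchmark;
# in-sample fit can serve as a secondary, tie-breaking criterion.
best = min(candidate_specs, key=lambda s: calibration_gap(candidate_specs[s][0]))
print(best)
```

Note the design choice: the spec with the best in-sample fit (`no_saturation`) is rejected because it is furthest from the causal benchmark, which is precisely the point of Level 2.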
Level 3 — Strongest
Full integration
Bring incrementality evidence directly into the model as Bayesian priors or coefficient constraints. The model is explicitly steered toward the experiment’s measured causal truth. The only method that genuinely blends both approaches rather than looking at them side by side.
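One way to see what "full integration" means mechanically is a conjugate Normal–Normal update: the experiment's lift estimate and its standard error act as a prior on the channel coefficient, and a precision-weighted average blends it with the MMM's own unconstrained estimate. A hedged sketch with invented figures (real MMM tooling fits this jointly with the rest of the model, but the arithmetic of the blend is the same):

```python
# Hypothetical sketch of Bayesian prior integration.
# Prior from the incrementality experiment: causal ROI and its std. error.
prior_mean, prior_se = 1.8, 0.3

# Likelihood summary from the MMM's data alone (the unconstrained estimate).
mmm_mean, mmm_se = 3.1, 0.9

# Conjugate Normal-Normal update: precision-weighted average of the two.
prior_prec = 1 / prior_se**2
mmm_prec = 1 / mmm_se**2
post_mean = (prior_prec * prior_mean + mmm_prec * mmm_mean) / (prior_prec + mmm_prec)
post_se = (prior_prec + mmm_prec) ** -0.5

print(f"posterior ROI {post_mean:.2f} +/- {post_se:.2f}")
```

Because the experiment is measured more precisely than the model's own estimate here, the posterior sits close to the causal benchmark: the model is explicitly pulled toward the measured truth rather than merely compared against it.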
The Hard-to-measure Channels: A Structural Incrementality Gap
Beyond the prioritisation framework, there is a specific class of channels where incrementality testing is valuable almost regardless of budget size — because the measurement gap is structural, not a data problem:
Linear TV
Builds brand equity over weeks; short-window attribution misses the lag effect entirely. Geo experiments capture the full incremental window.
Connected TV
Cross-device conversion paths break attribution. Incrementality testing at the geo level captures total sales impact regardless of fulfilment channel.
Radio & audio
Drives in-store and phone conversions with no digital trace. Incrementality from a geo experiment sees these pathways; attribution cannot.
Brand advertising
Long payback windows and diffuse effects make correlation-based measurement unreliable. Experiments establish the causal baseline.
Retail media
Halo effects on in-store purchase and competitor switching are systematically missed by in-platform ROAS reporting.
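What all of these channels share is that a geo experiment measures total sales impact rather than tracked conversions. The core read-out is a difference-in-differences: compare the sales change in treated geos against the change in control geos. A minimal sketch with invented sales figures:

```python
# Hypothetical sketch: difference-in-differences read of a geo experiment.
# Treatment geos receive the media; control geos do not. Sales figures
# are invented for illustration and cover ALL sales, online and offline.

pre  = {"treatment": 1000.0, "control": 950.0}   # avg weekly sales before
post = {"treatment": 1150.0, "control": 980.0}   # avg weekly sales during test

treatment_change = post["treatment"] - pre["treatment"]   # includes market trend
control_change   = post["control"] - pre["control"]       # market trend only

# Subtracting the control change removes the shared trend, leaving
# the incremental effect of the media — whatever pathway it took.
incremental_lift = treatment_change - control_change
print(incremental_lift)
```

Because the comparison is at the geo level, offline purchases, delayed decisions, and cross-device paths are all captured in the sales totals, which is exactly what per-click attribution structurally cannot do.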
The Goal: A Meta-analytic Incrementality Library
A single experiment is informative. A sustained programme of incrementality testing builds something far more valuable: a growing library of causal benchmarks across channels, formats, regions, and market conditions. Over time, this becomes the most reliable foundation for MMM priors and confident investment decisions: knowledge that cannot be bought off the shelf.
The only evidence stronger than a single well-run incrementality experiment is a body of many. Build towards that library, channel by channel.
Building your incrementality testing roadmap
- Start with the channel where uncertainty × budget is highest
- Use MMM to guide experiment design: duration, market selection, and spend levels
- Feed incrementality results back into the model after each test
- Expand the programme channel by channel, format by format
- Build a benchmark library across geographies, seasons, and competitive contexts
