MMM and incrementality are not competing approaches; they are complementary. MMM explains how advertising works, and incrementality adds precision at the decision level, helping validate impact in areas where MMM has less visibility, such as new or fast-changing channels.
Thought Leadership - 7 min read - MASS Analytics
Incrementality is the question underneath every marketing budget decision: how much of this revenue would have happened without us? Not “which channels look like they’re performing”, but what did our marketing actually, causally produce?
Marketing Mix Modeling answers an enormous amount. It gives you a holistic view of what’s driving sales, lets you model future scenarios, and tells you how channels interact over time. For most organisations it is the most complete picture of performance available. But MMM works through statistical inference: it identifies patterns and attributes causes from correlation. And correlation, however sophisticated the statistics, is not proof of cause.
Why The Distinction Matters
Consider a TV campaign. Sales go up in the weeks it runs. The MMM model attributes a contribution to TV. But was it the TV that caused the uplift? Or was there a seasonal effect already coming, a competitor promotion ending, a pricing change that week? The model separates these as best it can, but it is working from patterns in historical data. It cannot run a control group.
Incrementality is what your advertising caused. Everything else, including correlation, attribution, and platform reporting, is a proxy for it. Only controlled measurement gets you to the real number.
This distinction has teeth in practice. A channel can look high-ROI in your MMM and yet be capturing sales that would have happened regardless. It can look low-ROI and be genuinely driving significant incremental revenue through a pathway the model can’t fully see. Acting on the wrong read is expensive, and avoidable.
Where The Gap Creates Risk
Channels with low spend variation
When a channel’s budget barely changes over the modelling period, MMM struggles to isolate its specific contribution. The coefficient is uncertain. Incrementality testing provides a direct read where the model can’t.
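The statistics behind this are straightforward to demonstrate. The toy simulation below is a minimal sketch, not part of any real MMM: it fits a one-variable regression to two hypothetical channels with the same true effect and the same noise, differing only in how much their weekly spend varies, and compares the standard error of the estimated coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 104  # two years of weekly data

def coef_std_error(spend):
    """OLS standard error of the spend coefficient in a one-variable model.

    Sales are simulated with a known true effect of 2.0 per unit of spend
    plus noise; all numbers are illustrative, not from any real dataset.
    """
    sales = 2.0 * spend + rng.normal(0, 50, n_weeks)
    X = np.column_stack([np.ones(n_weeks), spend])
    beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
    resid = sales - X @ beta
    sigma2 = resid @ resid / (n_weeks - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return np.sqrt(cov[1, 1])

flat_spend = rng.normal(100, 2, n_weeks)     # budget barely moves
varied_spend = rng.normal(100, 30, n_weeks)  # budget moves a lot

se_flat = coef_std_error(flat_spend)
se_varied = coef_std_error(varied_spend)
# The flat-spend channel's coefficient is far less certain,
# even though its true effect is identical.
```

The flat-spend channel ends up with a standard error many times larger than the varied one, which is exactly the uncertainty an experiment resolves directly.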
New channels without history
MMM needs roughly two to three years of data to build reliable priors for a channel. For a new format or platform, those priors don’t exist yet. An experiment gives you the first causal benchmark before you scale spend.
Hard-to-measure media
TV, radio, OOH, and brand advertising drive purchases through pathways attribution can’t track — offline conversions, delayed decisions, cross-device paths. The true incremental effect is systematically underestimated.
Budget decisions under pressure
When finance asks for cuts, the channels that look weakest in the model are the first on the list. Without incrementality evidence, you may be cutting what’s actually driving your business.
What Incrementality Measurement Adds
Incrementality measurement doesn’t replace MMM. It gives MMM its ground truth. The two methods cover each other’s blind spots.
MMM provides the strategic map: how channels interact, how saturation develops over time, how to plan and scenario-model at the portfolio level. Incrementality measurement provides the causal validation points: controlled experiments that establish what specific channels actually caused, precise enough to anchor the model’s coefficients in observed reality rather than inferred patterns.
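One common way to anchor a coefficient in experimental evidence is a precision-weighted combination: treat the MMM estimate and the experiment read as two noisy measurements of the same quantity and weight each by its precision. The sketch below uses hypothetical ROI figures purely for illustration; real calibration workflows vary by modeling stack.

```python
def calibrate(mmm_mean, mmm_se, exp_mean, exp_se):
    """Shrink an MMM coefficient toward a geo-experiment read for the same
    channel, weighting each source by its precision (1 / variance).
    Standard conjugate-normal combination; inputs are assumptions."""
    w_mmm = 1.0 / mmm_se**2
    w_exp = 1.0 / exp_se**2
    mean = (w_mmm * mmm_mean + w_exp * exp_mean) / (w_mmm + w_exp)
    se = (w_mmm + w_exp) ** -0.5
    return mean, se

# MMM reads an ROI of 3.0 with a wide interval; the experiment reads 1.8
# tightly. The calibrated estimate lands close to the experiment.
roi, se = calibrate(mmm_mean=3.0, mmm_se=1.0, exp_mean=1.8, exp_se=0.3)
```

Because the experiment is the more precise source here, it dominates the combined estimate, which is the "anchoring in observed reality" described above.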
MMM and Experiments: A Two-way Relationship
It is often underappreciated that the relationship between MMM and incrementality experiments runs in both directions. Experiment results calibrate the model. But the model also improves the experiments: it guides how many weeks a test needs to run, which markets to use as test and control, and what level of spend variation is needed to detect a meaningful signal. Done well, each makes the other more useful.
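The "how many weeks" question is a standard power calculation. As a minimal sketch, assuming the weekly test-minus-control gap is roughly normal with a standard deviation estimable from pre-period data (often informed by the MMM's residuals), the required duration at ~5% significance and ~80% power is:

```python
import math

def weeks_needed(weekly_sd, min_lift, alpha_z=1.96, power_z=0.84):
    """Weeks of data needed so a weekly lift of `min_lift` is detectable
    at ~5% significance with ~80% power. `weekly_sd` is the standard
    deviation of the weekly test-minus-control gap in the pre-period.
    All inputs here are illustrative assumptions."""
    return math.ceil(((alpha_z + power_z) * weekly_sd / min_lift) ** 2)

# If the gap between matched geos normally wobbles by 500 units a week,
# detecting a 300-unit weekly lift needs about 22 weeks of data.
w = weeks_needed(weekly_sd=500, min_lift=300)
```

The same formula, read in reverse, tells you how much spend variation (and hence expected lift) the test must generate to finish in an acceptable window.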
The next article explains exactly how geo experiments work — what makes a result trustworthy, what can go wrong, and what good experimental design looks like in practice.
The questions this series answers
- What is incrementality and why does it matter beyond what MMM already tells you?
- How do geo experiments measure incrementality, and what makes one trustworthy?
- How do experiment results feed back into MMM to sharpen its coefficients?
- Which channels should be prioritised for incrementality testing, and in what order?
- What does a continuous incrementality measurement programme look like in practice?
