By Dr. Ramla Jarrar, President and Co-Founder of MASS Analytics
and Rick Bruner, CEO and founder of Central Control, Inc.
For decades, advertisers have searched for a universal, trustworthy way to measure the true sales impact of advertising. Every generation of measurement has promised clarity, yet each has encountered hard limits.
Marketing Mix Modeling (MMM) emerged in the pre-internet era as a practical way to understand how advertising investment relates to sales outcomes. Geographic experiments also existed then, but were often small, bespoke, and operationally difficult.
Then, in the late 1990s and early 2000s, the rise of the web brought a seductive idea: that advertising could finally be measured with precise, one-to-one accountability. Clicks, cookies, and conversion pixels seemed to herald a future of deterministic measurement. That future never fully arrived.
Media fragmented across platforms, devices, and ecosystems. Consumer journeys became impossibly complex. Multitouch Attribution (MTA) promised to map these pathways and assign credit across touchpoints, but the approach was ultimately correlation dressed up as causality. It rarely measured what advertisers actually care about: incremental sales lift caused by advertising.
Privacy regulation and tracking loss then accelerated the decline of user-level precision. Match rates between ad exposure and purchase outcomes are often below 50%, and even modest shortfalls make it extremely difficult to measure the small lift effects typical of advertising.
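To make the match-rate problem concrete, here is a rough, illustrative sketch of how a shortfall in matched users widens the minimum detectable lift of a two-arm conversion test. The sample sizes, 2% conversion rate, and power constants are assumptions chosen for illustration, not figures from the article.

```python
import math

def minimum_detectable_lift(n_per_arm, conv_rate, match_rate,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate minimum detectable absolute lift for a two-arm
    conversion test when only a fraction of exposures can be matched
    to purchase outcomes. Normal approximation, 5% alpha, 80% power."""
    n_eff = n_per_arm * match_rate  # only matched users contribute
    se = math.sqrt(2 * conv_rate * (1 - conv_rate) / n_eff)
    return (z_alpha + z_beta) * se

full = minimum_detectable_lift(1_000_000, 0.02, 1.0)
half = minimum_detectable_lift(1_000_000, 0.02, 0.5)
print(f"match rate 100%: MDE ~ {full:.4%}")
print(f"match rate  50%: MDE ~ {half:.4%}")
```

Halving the match rate inflates the detectable effect by a factor of sqrt(2), which matters precisely because real advertising lifts are often small.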
Into this gap, two older disciplines have returned to prominence, now stronger, modernized, and increasingly essential:

Marketing Mix Modeling
Randomized Controlled Experiments
And the next evolution is clear: After MMM and MTA comes MPE: Models Plus Experiments.
MMM: Strategic, Holistic, Necessary But Not Measurement
MMM remains one of the most valuable tools in the modern marketing organization. In today’s complex media environment, it offers distinct strengths:
A holistic picture of the entire media mix
Scenario planning to explore alternative investment strategies
A long-horizon, executive-level view of marketing performance
The ability to zoom out for strategy and zoom in for tactical optimization
MMM helps marketers make sense of complexity.
But MMM is still, fundamentally, a modeling approach, not direct measurement.
Every MMM depends on assumptions: functional forms, priors, lag structures, saturation curves, data choices, and philosophical decisions about how advertising works. Different modelers can reach different answers from the same data.
MMM is indispensable, but it is not ground truth.
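Two of the assumptions named above, lag structures and saturation curves, can be made concrete with a short sketch. This is a minimal illustration of one common parameterization (geometric adstock plus a Hill saturation curve); the decay rate and half-saturation point are placeholder values, and real MMMs estimate or set these differently.

```python
import numpy as np

def geometric_adstock(spend, decay=0.5):
    """Carryover (lag) assumption: each period retains `decay`
    of the previous period's accumulated media effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_sat=100.0, shape=1.0):
    """Diminishing-returns assumption: response flattens as
    adstocked spend grows past the half-saturation point."""
    return x**shape / (x**shape + half_sat**shape)

weekly_spend = np.array([100.0, 0.0, 0.0, 50.0, 200.0])
response = hill_saturation(geometric_adstock(weekly_spend, decay=0.5))
```

Changing `decay`, `half_sat`, or `shape` changes the model's answer, which is exactly why two modelers can reach different conclusions from the same data.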
Experiments: The Gold Standard of Causal Evidence
That is why experiments matter.
Randomized controlled trials (RCTs) remain the most rigorous scientific method for causal inference. They resolve the central problem of marketing analytics: correlation does not imply causation.
Well-designed experiments offer major advantages:
Direct measurement of incremental impact
Results in weeks, not quarters
Applicability across channels, vendors, and KPIs
Transparency, auditability, and replicability
In medicine, the profession embraced “evidence-based practice” and the hierarchy of evidence, placing randomized trials at the top. Advertising faces the same challenge: separating true incrementality from noise, bias, and self-serving metrics.
The same hierarchy applies here: experimental evidence outranks observational inference, and the strongest evidence comes from many randomized trials over time.
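The core readout of a randomized test is simple enough to fit in a few lines. Below is a minimal sketch of an incremental-lift estimate with a normal-approximation confidence interval; the counts are hypothetical, and production analyses typically add covariate adjustment and multiple-testing controls.

```python
import math

def lift_with_ci(conversions_t, n_t, conversions_c, n_c, z=1.96):
    """Absolute incremental lift (treated minus control conversion
    rate) with an approximate 95% confidence interval."""
    p_t, p_c = conversions_t / n_t, conversions_c / n_c
    lift = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return lift, (lift - z * se, lift + z * se)

# Hypothetical readout: 2.2% vs 2.0% conversion across 10k users per arm
lift, (low, high) = lift_with_ci(220, 10_000, 200, 10_000)
```

Because assignment was randomized, the difference in rates is a direct causal estimate rather than a correlation to be argued over.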
The Pinnacle: Models Plus Experiments
The best approach is not MMM or experiments.
It is MMM plus experiments. A sweet measurement confection, the two go together like chocolate and peanut butter.
With a regular practice of experimentation, across media channels, tactics, formats, regions, and product categories, advertisers can build a true benchmark of what works.
Those results then become:
Validation for MMM outputs
Calibration points for model coefficients
Bayesian priors grounded in real causal evidence
The foundation for more confident scenario planning
MPE is the most rigorous framework available today: the scale of models combined with the credibility of experiments.
Beware of “Experiments” That Aren’t Experiments
Not everything labeled an experiment deserves gold-standard credibility.
Matched market tests, synthetic controls, and other quasi-experimental methods remain assumption-laden and highly sensitive to bias and overfitting. They can be useful when RCTs are truly infeasible, but they are not substitutes for randomization. In advertising, RCTs are rarely infeasible, provided executives are committed to high-quality answers and researchers have the discipline to avoid half-measures.
User-level experiments also face the same identity degradation that undermined MTA. Poor match rates and cross-device fragmentation make them increasingly unreliable for measuring small lift effects.
Why Large-Scale Geo RCTs Are the Best Option
The most robust experimental approach today is large-scale randomized geographic testing.
Geo RCTs:
Avoid identity and privacy issues
Work across virtually all media
Are understandable to executives
Are transparent and replicable
Scale nationally when designed properly
In the US, this means using all 210 DMAs whenever possible, rather than sampling a handful of markets. In other countries, the same principle applies: large-scale randomization across major metropolitan areas, regions, or clusters of postal codes can provide a similarly robust experimental framework, suited to the geographic and media structure of local markets.
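The randomization step itself is straightforward. Here is a minimal sketch that splits a full set of geographic units into treatment and control; the DMA identifiers are placeholders, and a production design would typically stratify or pair markets by size before randomizing to tighten the comparison.

```python
import random

def assign_geos(geos, treat_share=0.5, seed=42):
    """Randomly split geographic units (e.g. DMAs) into treatment
    and control groups. Plain randomization for brevity; pairing or
    stratifying by market size reduces variance in practice."""
    rng = random.Random(seed)  # fixed seed makes the split replicable
    shuffled = list(geos)
    rng.shuffle(shuffled)
    k = int(len(shuffled) * treat_share)
    return shuffled[:k], shuffled[k:]

dmas = [f"DMA_{i:03d}" for i in range(210)]  # placeholder IDs
treatment, control = assign_geos(dmas)
```

Using all 210 units rather than a handful is what gives the design enough statistical power to detect realistic lift sizes.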
The future of ROI measurement is not correlation. It is not black-box attribution. It is not quasi-experimental theater.
The future is Models Plus Experiments.
Ready to talk about your business?
Whether you’re approaching MMM for the first time or looking to improve an existing measurement program, we’d be glad to walk through what it would look like for your specific structure.

