Webinar: Calibrating Mix Models With Ad Experiments

Here’s a quick recap of what went down when Ramla Jarrar (MASS Analytics), Rick Bruner (Central Control), and Talgat Mussin came together to talk about one of the biggest shifts happening in marketing measurement today: bringing Marketing Mix Modeling (MMM) and ad experiments together.

[Embedded YouTube video of the webinar recording]

Why this topic matters

For years, marketers have been stuck trying to separate what’s truly working from what just looks like it’s working. We’ve relied on clicks, correlations, and attribution models. But those don’t always show cause and effect. As Rick put it, “The purpose of advertising isn’t to predict who’s going to convert, it’s to persuade.”

That’s where experiments come in. Done right, experiments can help marketers get closer to the truth by showing how one action actually causes an outcome. Combine that with MMM, and you’ve got a much stronger, evidence-based understanding of what drives performance.

Correlation vs. Causation — and why we should care

Rick kicked things off with a short trip back to the roots of scientific thinking, quoting philosopher Immanuel Kant’s “Dare to know” and poet William Blake’s “The true method of knowledge is experiment.”

His main message: marketing is full of correlations, but not all correlations mean causation. The only way to truly prove cause and effect is through proper experimentation, ideally using randomized control groups that eliminate bias and unknown variables.

He reminded us that “AI is just a faster form of correlation,” not a replacement for real experimentation. The future, he said, isn’t about choosing between MMM and experiments, but using both together: “MMM + Experiments,” or what he called MPE.

How to design a good experiment

Talgat then dove into the “how.” He explained the difference between user-based and geo-based experiments, and why the industry is rediscovering the power of geo experiments — especially with privacy regulations, ad blockers, and the messy reality of multi-device users.

He traced the history of experiments from Procter & Gamble’s early “Wash Day” campaign in the 1950s to today’s sophisticated geo-testing frameworks. What stood out most was his call for scientific discipline: randomization, balance, and clean data collection.
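To make the randomization-and-balance point concrete, here’s a minimal sketch (an illustration, not code shown in the webinar) of one way to split markets into test and control groups and re-randomize until the pre-period metric is reasonably balanced. The geo labels and sales figures are made up.

```python
import random

# Hypothetical pre-campaign sales per market (illustrative numbers only).
pre_period_sales = {
    "geo_01": 120_000, "geo_02": 240_000, "geo_03": 95_000, "geo_04": 80_000,
    "geo_05": 60_000, "geo_06": 55_000, "geo_07": 70_000, "geo_08": 65_000,
}

def random_split(markets, seed):
    """Randomly assign half of the markets to test and the rest to control."""
    rng = random.Random(seed)
    geos = sorted(markets)
    rng.shuffle(geos)
    half = len(geos) // 2
    return geos[:half], geos[half:]

def imbalance(test, control):
    """Relative gap in pre-period sales between the two groups."""
    t = sum(pre_period_sales[g] for g in test)
    c = sum(pre_period_sales[g] for g in control)
    return abs(t - c) / (t + c)

# Re-randomize a few hundred times and keep the best-balanced assignment.
test_geos, control_geos = min(
    (random_split(pre_period_sales, seed) for seed in range(500)),
    key=lambda split: imbalance(*split),
)
print("Test geos:   ", test_geos)
print("Control geos:", control_geos)
print(f"Pre-period imbalance: {imbalance(test_geos, control_geos):.1%}")
```

In practice you’d balance on more than one pre-period variable and hold out the assignment rule before the campaign starts, but the idea is the same: let chance, not judgment, decide which markets see the ads.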

He also talked about the importance of understanding incrementality and marginality — or in simpler terms, figuring out both the total impact of your marketing and what happens when you spend just a bit more. His takeaway? Good experiments don’t chase users or cookies. They’re privacy-safe, data-light, and focused on real, top-line outcomes.
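To put rough numbers on those two ideas (the figures and curve parameters below are invented purely for illustration), incrementality compares the test geos against the control counterfactual, while marginality asks what the next unit of spend would buy on an assumed saturating response curve.

```python
import math

# Illustrative geo-experiment readout (made-up numbers).
test_sales = 1_150_000      # sales in the test geos during the campaign
control_sales = 1_000_000   # what the control geos suggest would have happened anyway
ad_spend = 50_000           # media spend in the test geos

# Incrementality: how much of the outcome the ads actually caused.
incremental_sales = test_sales - control_sales
incremental_roas = incremental_sales / ad_spend
print(f"Incremental sales: {incremental_sales:,.0f}")
print(f"Incremental ROAS:  {incremental_roas:.2f}")

# Marginality: what one extra unit of spend buys once returns start to saturate.
# Assume a simple saturating curve: incremental_sales = a * (1 - exp(-spend / b)).
a, b = 265_000, 60_000      # assumed curve parameters, for illustration only

def incremental_response(spend):
    return a * (1 - math.exp(-spend / b))

delta = 1_000               # "spend just a bit more"
marginal_roas = (incremental_response(ad_spend + delta)
                 - incremental_response(ad_spend)) / delta
print(f"Marginal ROAS at current spend: {marginal_roas:.2f}")
```

With these toy numbers the average incremental ROAS is 3.0 but the marginal ROAS is closer to 1.9, which is exactly the kind of diminishing-returns signal a budget decision needs.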

From experiments to MMM calibration

Then Ramla brought it all together. She walked through how experiment results can actually be used to calibrate MMMs, helping models get closer to reality.

She broke it down into three approaches:

  1. Qualitative calibration – comparing MMM results with experiment results to spot differences and learn from them.
  2. Model selection calibration – choosing, among candidate models, the one whose results best align with the experiment outcomes.
  3. Full integration – the gold standard, where experiment results (incrementality, ROAS, saturation curves) are embedded into the MMM as priors (see the sketch after this list).
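To give a feel for what full integration can look like, here’s a minimal Bayesian sketch in Python using PyMC. It is an illustration, not MASS Analytics’ actual implementation: it assumes a single channel with a linear response, and the hypothetical experiment_roas and experiment_roas_se values stand in for the incremental ROAS estimate and uncertainty a geo test would deliver.

```python
import numpy as np
import pymc as pm

# Toy weekly data (illustrative only): media spend and sales.
rng = np.random.default_rng(7)
weeks = 104
spend = rng.gamma(shape=2.0, scale=25_000, size=weeks)
sales = 200_000 + 2.5 * spend + rng.normal(0, 30_000, size=weeks)

# Geo-experiment readout: incremental ROAS estimate and its standard error.
# These numbers are hypothetical; in practice they come from the test itself.
experiment_roas = 2.4
experiment_roas_se = 0.5

with pm.Model() as mmm:
    baseline = pm.Normal("baseline", mu=sales.mean(), sigma=sales.std())
    # Calibration step: the experiment result becomes the prior on the
    # channel's response coefficient instead of a vague default.
    roas = pm.Normal("roas", mu=experiment_roas, sigma=experiment_roas_se)
    noise = pm.HalfNormal("noise", sigma=sales.std())
    pm.Normal("sales", mu=baseline + roas * spend, sigma=noise, observed=sales)
    trace = pm.sample(1_000, tune=1_000, chains=2, random_seed=7)

print(f"Posterior mean ROAS: {trace.posterior['roas'].mean().item():.2f}")
```

A production MMM would add adstock and saturation transforms, but the calibration mechanism is the same: the experiment estimate narrows the prior on the channel’s effect, so the model can no longer drift far from what the test actually measured.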

She shared a real client example where this integration led to a 26% increase in measured contribution, a 24% improvement in ROI, and a 33% bigger budget share for the tested channel — proof that calibrating MMMs with experiments brings results closer to ground truth, both statistically and from a business perspective.

Ramla also shared a few practical lessons:

  • Make sure your experiment and MMM use the same definitions and metrics (e.g., spend vs. clicks, daily vs. weekly data).
  • Don’t integrate experiment results blindly — always check how they were run.
  • “A bad experiment is worse than no experiment,” she said. “If you don’t know how it was done, don’t use it.”

Final takeaways

Talgat wrapped up the session with a few key lessons for anyone looking to try this approach:

  • Collaborate closely across teams (data, media, agencies) to avoid mistakes.
  • Be transparent with methods and data sharing.
  • Use MMM to inform what to test next; the two feed each other.
  • And above all, do more experiments. Start big, learn fast, and refine as you go.

As Rick summed up, echoing statistician George Box, “All models are wrong, but some are useful.” The point isn’t perfection, it’s progress. By combining the storytelling power of MMM with the scientific rigor of experiments, marketers can finally move from guessing to knowing what really drives growth.