This May, we headed to Menorca for the I-COM Summit Experience — a beautiful place with a very focused mission. I-COM brings together people who really care about smart data marketing, and they do it in a personal, open setting. It’s not a huge conference where everyone gets lost in the noise. It’s the opposite: small enough that you can actually talk, debate, learn, and build real connections.
We were proud to see our President, Dr. Ramla Jarrar, join the panel on Calibrating MMM With Experiments. She spoke alongside:
• Rick Bruner, CEO at Central Control
• Gregg Nathan, SVP Head of Marketing Measurement & Analytics at Fidelity Investments
• Dan Hagen, Global Chief Data & Technology Officer at Havas
It’s a topic that’s close to our heart at MASS Analytics: how MMM and experimentation can — and should — work together.
What Ramla shared on stage
Ramla spoke about how today’s MMM is very different from the older, slow-moving models many marketers remember. Brands don’t want annual answers anymore. They want models they can update every few weeks or months, so they can make decisions on time, not months later. MMM has grown up — it’s faster, more granular, and much more practical.
But even strong models need something else: calibration.
That’s where experiments come in.
Ramla explained that MMM gives the full picture, across all channels, tactics, and external factors. Experiments zoom in on one specific change, so you can see the impact clearly. Put them together, and you get a system that’s both broad and precise.
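To make that "broad plus precise" idea concrete, one common calibration approach (a simplified sketch, not necessarily the exact method discussed on the panel) treats the experiment's measured lift as a second, higher-precision estimate of the same quantity and blends it with the model's read, weighting each by its precision. All figures below are hypothetical:

```python
# Inverse-variance weighting: blend an MMM channel estimate with a lift test.
# All numbers are hypothetical, for illustration only.

def calibrate(mmm_est, mmm_se, exp_est, exp_se):
    """Precision-weighted average of two estimates of the same channel lift."""
    w_mmm = 1.0 / mmm_se**2   # precision (inverse variance) of the MMM read
    w_exp = 1.0 / exp_se**2   # precision of the experiment read
    blended = (w_mmm * mmm_est + w_exp * exp_est) / (w_mmm + w_exp)
    blended_se = (w_mmm + w_exp) ** -0.5
    return blended, blended_se

# The model says the channel drives a 12% lift, with wide uncertainty;
# a geo experiment measured an 8% lift, with tight uncertainty.
est, se = calibrate(0.12, 0.04, 0.08, 0.01)
print(f"calibrated lift: {est:.3f} +/- {se:.3f}")
```

The calibrated estimate lands close to the experiment (because it is more precise) but is still informed by the model, and its uncertainty is tighter than either input alone. That is the intuition behind using experiments to anchor, rather than replace, the model.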
She also talked about what happens when a model and an experiment don’t agree. And yes, it happens more than people think. A disagreement isn’t a failure — it’s a signal to go deeper, check what’s missing, and learn something new. Sometimes the model needs to be adjusted. Sometimes the experiment wasn’t clean. Sometimes there’s an unseen factor hiding in the data. The point is: tension between the two is useful.
Another big topic was data quality. Even with all the right tools, bad data can ruin everything: missing spend, messy timestamps, wrong attributions. Calibration helps, but good inputs still matter.
Someone brought up the article in which Mark Zuckerberg claimed Meta could replace the entire ad measurement ecosystem with its own AI, essentially suggesting that independent measurement is no longer needed. Ramla's take was simple: measurement isn't just about reading a result, it's about understanding what's happening and learning from it. Without that, you're just taking someone's word for it. No brand should fly blind.
What made I-COM special
This wasn’t just a panel. The whole event is built around small conversations, honest debates, and real knowledge-sharing. There are keynotes and roundtables, but also informal moments where ideas actually stick.
For us, it was inspiring to hear how many brands are now building internal expertise, especially in places like the UK, where in-housing MMM is growing fast. People want models they can understand and control, not black boxes run by outside vendors.
Models plus experiments — or “MPE” as the panel called it — isn’t just a trend. It’s becoming a standard. More brands are making calibration part of their always-on measurement plans, instead of a one-time test.
A big thank you
Thank you to I-COM for hosting such a thoughtful event, and to Rick, Gregg, and Dan for a panel full of honesty and practical ideas. It’s refreshing to be in a space where people are not just talking about measurement — they’re improving it.
We left Menorca with new ideas, new partners, and more confidence that MMM is only getting stronger. Faster models, better calibration, cleaner data, and a lot more openness. It’s a good direction.
And we’re glad to be part of it.

