Every channel in your marketing stack is reporting results. The numbers look solid. So why does it still feel like guesswork? Because the data you’re relying on was never designed to tell you the whole truth.
Thought Leadership - 7 min read - MASS Analytics
There’s a question that sits quietly at the back of almost every marketing meeting: Is any of this actually working?
Not working in the sense that campaigns are running and reports are being generated; those things happen. Working in the sense that the money you’re spending is genuinely driving revenue, and that you know it with confidence.
For most retailers, the honest answer is: not entirely sure. And that uncertainty is more expensive than most boards realise.
The Measurement Problem Hiding in Plain Sight
Here’s what the typical marketing measurement setup looks like. You have a Meta campaign. Meta’s reporting tells you it generated strong results. You have a Google campaign. Google’s dashboard tells you the same. Your leaflet distribution is measured by your agency, who also tell you it’s performing. Add in a TV or connected TV buy, some in-store promotions, and a few seasonal pushes. And each one, viewed in isolation, looks like it’s doing its job.
But when every channel is marking its own homework, you know there’s a problem from the start.
This is the measurement trap. Every channel is measuring itself, reporting favourably, and the overall picture still doesn’t add up to a clear answer about what’s really driving your sales. The data isn’t lying to you maliciously; it’s simply incomplete. Each platform can only see its own contribution, and has every incentive to make that contribution look as large as possible.
This is a structural problem rather than a technology or data problem. You’re looking at your business through a series of keyholes, one per channel, and trying to understand the whole room.
Four Problems That Compound the Confusion
Siloed channel reporting
Each platform measures only its own contribution, ignoring how channels influence each other. The result is systematic misattribution.
Backwards-only thinking
Most measurement tools report on what happened. They don’t tell you what to do differently next time, or what happens if you change the mix.
Stale insights
Traditional analysis often delivers a PowerPoint six months after the fact; too slow for media cycles that move week by week.
Knee-jerk reactions
Without clear data, decisions get made reactively: matching competitor spend, cutting costs that look soft, upping paid search when the month looks short.
The Channels-Influence-Each-Other Problem
There’s a subtler issue that makes siloed reporting particularly misleading: your marketing channels don’t operate independently of each other, but your measurement systems treat them as if they do.
Consider a straightforward example. A retailer runs a TV campaign building brand awareness. Separately, they’re running a paid search campaign. Viewed individually, both look like they’re generating returns. In reality, people who’ve seen the TV ad are more likely to search, and therefore more likely to click: the TV campaign is inflating the apparent performance of search. If you cut the TV, search performance drops. But your search dashboard won’t tell you that.
This is the synergy problem. Two media channels operating simultaneously don’t simply add their contributions together; they multiply them. Measuring them separately means you’ll systematically misattribute results: overestimate some channels, underestimate others, and make budget decisions based on a picture that doesn’t match reality.
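To see how this misattribution plays out, here is a toy back-of-envelope sketch. Every number in it is a hypothetical assumption chosen for illustration, not data from any real campaign:

```python
# Toy illustration of the synergy problem. All figures are hypothetical.
BASE_SEARCHES = 1000    # assumed weekly branded searches with no TV on air
TV_SEARCH_LIFT = 0.40   # assumed lift: TV on air raises search volume by 40%
CLICK_RATE = 0.10       # assumed share of searches that click the paid ad
CONV_RATE = 0.05        # assumed share of clicks that convert
ORDER_VALUE = 60.0      # assumed average order value in dollars

def weekly_search_revenue(tv_on: bool) -> float:
    """Revenue the search dashboard reports for one week."""
    lift = 1 + TV_SEARCH_LIFT if tv_on else 1.0
    searches = BASE_SEARCHES * lift
    return searches * CLICK_RATE * CONV_RATE * ORDER_VALUE

with_tv = weekly_search_revenue(tv_on=True)      # 420.0
without_tv = weekly_search_revenue(tv_on=False)  # 300.0

# The search dashboard credits all 420 to search, but 120 of it only
# exists because TV is on air, and vanishes if the TV budget is cut.
tv_assisted = with_tv - without_tv               # 120.0
```

In this sketch, more than a quarter of "search revenue" is really TV-assisted revenue, which is exactly the portion a search-only dashboard can never surface.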
The Rearview Mirror Problem
Even where measurement is working reasonably well, there’s a second structural limitation: the reports you’re generating are pointing backwards.
Imagine driving a car with only a rearview mirror. You can see exactly where you’ve been. You can even see the turn you missed. But it tells you nothing about what’s coming, and nothing about the best route forward.
Most marketing analytics sits in exactly this position. Attribution models, platform dashboards, campaign post-mortems: they’re all retrospective. They answer “what happened?” reasonably well. They don’t answer “what should we do next?” or “what happens if we change our budget allocation?” or “how will our business respond to a competitor increasing their spend?”
Those forward-looking questions are the ones that actually drive decisions. And they require a fundamentally different kind of measurement.
What This Costs in Practice
The consequences of measurement gaps tend to accumulate quietly. Budgets get allocated based on which channels shout loudest about their own results. Channels that are genuinely driving revenue but are difficult to measure directly (leaflet distribution, brand-building TV, certain forms of out-of-home) get cut, because their numbers are harder to defend in a board meeting.
- 6 months: typical lag between activity and insight delivery with traditional MMM
- $14.5M: revenue impact of a $1.3M cost-saving leaflet cut, discovered too late
- 18%: surge in media-driven revenue achievable with optimised channel allocation
The middle figure, the $14.5M revenue impact of cutting leaflets, comes from a real client engagement. A retailer made what looked, on paper, like a sensible cost-saving decision: cutting leaflet distribution saved $1.3M. When the model was built and the analysis run, what became clear was that the saving had triggered a revenue loss of $14.5M. The problem was that without store-level measurement, there was no way to know which areas were load-bearing and which weren’t. Everything got cut, including the parts that mattered enormously.
That is what unmeasured marketing costs: actual, recoverable revenue.
The Question Every Retailer Should Be Asking
There is a better question than “which channel is performing?” The better question is: what is really driving my sales, and how can I make every marketing dollar work harder?
That question requires a different kind of analysis, one that looks at all your channels together, accounts for the external forces acting on your business (competition, pricing, seasonality, the economy), and produces recommendations you can act on, rather than reports that get filed away.
The questions MMM is built to answer
- What is truly driving my sales, and what isn’t?
- How much is competition actually costing me in real revenue terms?
- Which channels are generating the highest ROI, and which have hit diminishing returns?
- If I changed my budget mix, what would happen to revenue?
- Where should I spend more, and where am I wasting money?
MMM is the methodology built to answer these questions. It’s not a new idea, but the way it’s being applied has changed significantly over the years.
What used to be a slow, expensive, consultancy-heavy process that delivered a PowerPoint six months after the fact has been rebuilt from the ground up as an always-on, decision-ready tool.
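To make “diminishing returns” concrete, here is a minimal sketch of the kind of saturation curve an MMM typically fits for each channel. The curve shape and every parameter below are illustrative assumptions, not output from a real model:

```python
# Illustrative saturation curve of the kind an MMM fits per channel.
# All parameters are made-up numbers for this sketch.

def channel_response(spend: float, max_sales: float, half_sat: float) -> float:
    """Incremental sales from a channel: grows quickly at first, then flattens."""
    return max_sales * spend / (spend + half_sat)

def marginal_roi(spend: float, max_sales: float, half_sat: float) -> float:
    """Extra sales generated by the next dollar of spend at this spend level."""
    step = 1.0
    return (channel_response(spend + step, max_sales, half_sat)
            - channel_response(spend, max_sales, half_sat))

# Hypothetical channel: caps at $100k incremental sales, half reached at $50k spend.
low = marginal_roi(10_000, max_sales=100_000, half_sat=50_000)    # ~1.39
high = marginal_roi(200_000, max_sales=100_000, half_sat=50_000)  # ~0.08

# Each extra dollar returns about $1.39 at low spend but only about $0.08
# once the channel is saturated: the signal behind "where should I spend
# more, and where am I wasting money".
```

Fitted across every channel at once, curves like these are what let the model answer the budget-mix question directly: move spend away from channels past the flat part of their curve and towards channels still on the steep part.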
The next article in this series explains exactly how it works, and why it produces a fundamentally different kind of insight from the channel reporting you’re already receiving.
Next article: What Is MMM and How Does It Actually Work?
Continue reading the series
Six articles taking you from the measurement problem to practical readiness — written for retail marketing leaders.

