
Leaflets: What a $14.5M Mistake Teaches Us About Retail Measurement 

A retailer cut their leaflet spend to save money. On paper, it looked like the right call. The model told a very different story, and the difference between those two versions of reality cost $14.5 million in revenue.

Case Study  -  8 min read  -  MASS Analytics 

Every retail business faces the same pressure at some point: reduce costs without damaging revenue. It’s a reasonable goal. The challenge is knowing which costs are safe to cut, and which are load-bearing walls you can’t remove without the ceiling coming down. 

Leaflet distribution (circulars, flyers, printed promotional material) has been a target for cost reduction in grocery and essential retail for years. Digital marketing is trackable, attributable, and increasingly central to most media plans. Printing and distributing physical leaflets is expensive, manual, and difficult to measure with precision. The logic for cutting them feels sound. 

This is the story of what happens when that logic runs ahead of the data.

The Decision That Made Sense in the Boardroom

A large US retailer, operating hundreds of stores across multiple markets, decided to reduce their leaflet distribution programme. The rationale was straightforward: distribution costs were significant, digital channels were growing, and the direct attribution of leaflets to sales was hard to demonstrate with the measurement tools they had available. 

The saving was $1.3 million. On the P&L, it looked like a win. 

Apparent saving: $1.3M. A reduction in leaflet distribution spend, visible in the budget and reported to the board as a cost-efficiency measure. 

Actual revenue impact: $14.5M. Revenue lost as a direct consequence of the cut, not visible until the model was built and not attributed to the decision until too late. 

The revenue drop that followed wasn’t immediate and wasn’t attributed to the leaflet cut in real time. Sales declined, and the reasons weren’t obvious. There was competition in some markets, some seasonal softness, the usual noise. It was only when the marketing mix model was built, looking back over the full data set, accounting for all the factors simultaneously, that the shape of what happened became clear.

What the Model Found

The model was built at store level. This is a critical detail. A model built at a national or regional level would have produced averages, and averages would have obscured the truth. Because the truth wasn’t uniform across the estate. 

What store-level modelling revealed 

Leaflet distribution was not equally effective across all stores. In some locations (certain DMAs, certain store formats, certain customer demographics), it was barely moving the needle. In others, it was one of the highest-ROI activities in the entire marketing mix. Cutting it uniformly had saved money in places where saving made sense and destroyed value in places where it absolutely did not. 

This is the kind of insight that’s only available when you’re measuring at the right level of granularity. A national-level analysis might show that leaflets, on average, are a reasonable investment. It can’t show you that they’re a critical driver for 40% of your stores and largely irrelevant for another 30%. Only a store-level model can do that.
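To make the aggregation problem concrete, here is a minimal, hypothetical sketch. The store names and figures are invented for illustration, not the retailer's data; the point is only that one healthy-looking national average can sit on top of wildly different store-level ROIs:

```python
# Hypothetical store-level leaflet data: spend and the incremental revenue
# a store-level model attributes to leaflets. All numbers are invented.
stores = {
    "store_A": {"spend": 10_000, "incremental_revenue": 80_000},  # leaflets critical
    "store_B": {"spend": 10_000, "incremental_revenue": 70_000},  # leaflets critical
    "store_C": {"spend": 10_000, "incremental_revenue": 5_000},   # barely moving the needle
    "store_D": {"spend": 10_000, "incremental_revenue": 3_000},   # barely moving the needle
}

# National-level view: one average ROI across the whole estate.
total_spend = sum(s["spend"] for s in stores.values())
total_revenue = sum(s["incremental_revenue"] for s in stores.values())
national_roi = total_revenue / total_spend  # 158,000 / 40,000 = 3.95

# Store-level view: ROI per store, which is where the decisions should be made.
store_roi = {name: s["incremental_revenue"] / s["spend"] for name, s in stores.items()}

print(f"National ROI: {national_roi:.2f}")  # looks like a reasonable investment
for name, roi in sorted(store_roi.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ROI {roi:.2f}")  # 8.0 and 7.0 versus 0.5 and 0.3
```

A uniform cut judged against the 3.95 average would have looked defensible; judged store by store, it forfeits the 8.0 and 7.0 stores to trim the 0.5 and 0.3 ones.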

What Was Recommended and What Happened Next

The model didn’t recommend reversing the cut entirely. That would have been as blunt an instrument as the original decision. Instead, the recommendations were surgical: 

1. Slow the pace of the reduction. Don’t accelerate it, and don’t reverse it all at once. 
2. Use the store-level data to identify specifically where distribution had the highest revenue impact, and protect spend in those locations. 
3. Identify the stores and DMAs where leaflets demonstrably weren’t working, and direct the savings from those cuts toward channels with higher ROI potential. 
4. Use the channel that emerged as an underexplored opportunity, Connected TV, to run controlled experiments in selected markets, building the evidence base for broader investment. 

The Connected TV finding is worth dwelling on. It emerged not just from the model’s analysis of existing channels, but from monitoring brand consideration data alongside the media data. The model identified a period of increased competitor activity and a corresponding dip in brand consideration, and then flagged Connected TV as a channel with the reach and targeting capability to address it in markets where leaflets were being reduced. 

Experiments were run. The results were incorporated into a refreshed model. And the outcome of applying the full set of recommendations was an 18% increase in media-driven revenue: $30 million in additional sales, and a return on advertising spend of 15.5, up from 13.5 the year before. 

The goal was never “spend more.” It was “spend smarter,” and the model made the difference between those two things visible. 

The Lessons That Apply to Any Retailer

The leaflet story isn’t really about leaflets. It’s about three things that apply across any retail marketing decision. 

First, the danger of measuring averages. Averages hide the stores that matter most, the channels that are genuinely working, and the locations where you’re wasting money. Store-level and DMA-level analysis is not a luxury; it’s where the actionable truth lives. 

Second, the cost of invisible causality. When you can’t see what’s driving your sales, you make decisions based on what’s visible, and what’s visible (the cost line) is often not what matters most (the revenue line). A $1.3M saving that generates a $14.5M revenue loss is a bad trade by any measure. The only way to know it’s a bad trade is to have the model that connects those two numbers. 
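The trade can be sanity-checked with back-of-envelope arithmetic. Strictly, a cost saving and a revenue loss aren't directly comparable: the lost revenue has to be converted into lost profit via gross margin. The sketch below uses the article's $1.3M and $14.5M figures plus a hypothetical 25% margin, which is an assumption and not stated in the case study:

```python
# Back-of-envelope check on the leaflet cut, in millions of dollars.
cost_saving = 1.3      # the visible P&L win
revenue_loss = 14.5    # what the model later attributed to the cut
gross_margin = 0.25    # HYPOTHETICAL margin; not stated in the case study

# Compare like with like: cost saved versus profit lost on the missing revenue.
lost_profit = revenue_loss * gross_margin      # 3.625
net_profit_impact = cost_saving - lost_profit  # -2.325: still a bad trade

print(f"Lost profit: ${lost_profit:.3f}M")
print(f"Net profit impact: ${net_profit_impact:.3f}M")
```

Even after the margin conversion, the trade stays negative under any plausible margin above roughly 9%, which is why the revenue line, not the cost line, is the number that should have driven the decision.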

Third, the difference between cutting and optimising. The right answer to “our leaflet spend is hard to justify” is rarely a simple “cut it.” It’s “measure it properly, identify where it’s working and where it isn’t, and make targeted decisions.” That’s not complicating the answer; it’s being accurate. And it’s the difference between a saving that harms the business and one that funds genuinely better alternatives. 

Questions this kind of analysis can answer for your business 

  • Which of our stores rely most heavily on leaflet distribution for footfall? 
  • Are we over-distributing in areas where customers shop regardless? 
  • Where could we reallocate leaflet budget for a higher return? 
  • What channels are underperforming relative to their potential, and which are in diminishing returns? 
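The “diminishing returns” question in the last bullet is usually answered with a response curve. As a hedged sketch, using a generic saturating curve with invented parameters rather than the model's actual specification, the marginal return shrinks as spend rises, and a channel is “in diminishing returns” once the next unit of spend earns less than it would in an alternative channel:

```python
import math

# Generic saturating response curve (invented parameters, arbitrary units):
# incremental revenue = MAX_REVENUE * (1 - exp(-K * spend)).
MAX_REVENUE = 100.0  # hypothetical revenue ceiling for the channel
K = 0.002            # hypothetical saturation rate

def response(spend):
    """Incremental revenue attributed to the channel at a given spend level."""
    return MAX_REVENUE * (1 - math.exp(-K * spend))

def marginal_roi(spend, step=1.0):
    """Revenue gained per extra unit of spend at the current spend level."""
    return (response(spend + step) - response(spend)) / step

# Marginal ROI falls as spend grows: the signature of diminishing returns.
for spend in (100, 500, 1000, 2000):
    print(f"spend={spend}: marginal ROI ~ {marginal_roi(spend):.3f}")
```

Reading the curve this way turns “is this channel underperforming?” into a comparison of marginal ROIs across channels, which is the allocation logic the next article develops.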

The next article goes deeper into three of the key analytical outputs (ROI by channel, response curves, and cross-channel synergy) and explains what each one means for how you allocate your media budget. 

Previous article: What is MMM and How Does It Actually Work

Next article: Reading The Signals: ROI, Response Curves, and Synergy

Continue reading the series 

Six articles taking you from the measurement problem to practical readiness — written for retail marketing leaders. 
