Marketing Mix Modeling (MMM) is primarily concerned with using historical data to explain the incremental contribution of the different sales drivers. This allows, among other things, calculating the Return on Investment (ROI) of every channel in the marketing mix, including media, over the historical modeling period.
But can Marketing Mix Modeling also predict the future?
Marketing Mix Modeling has already established itself as an indispensable tool that draws valuable learnings and insights from the past to help CMOs make informed decisions and wisely design and execute future marketing strategies. In addition, CMOs also expect Marketing Mix Modeling to tell them how much revenue every channel will generate and hence predict, with acceptable accuracy, overall sales over a given time period.
So, is Marketing Mix Modeling merely a descriptive analytics tool or is it also one of those fancy predictive machine learning techniques everyone is talking about?
Well, the answer is yes … and no. In fact, it depends!
Marketing Mix Modeling is based on regression analysis, one of the most popular techniques in statistics and machine learning. It is also a well-studied and well-understood methodology that has gained maturity and robustness over many decades through intensive research and wide use across various applications and fields.
In fact, regression analysis is grounded in estimation theory and in the glamorous and well-established Central Limit Theorem. These mathematical tools make it possible to quantify the error in estimating the unknown parameters, and thus to reduce that error if the right estimator is used.
It’s All About Sampling!
To simplify the concept, most of us would understand that taking a sample from a population and calculating the average of, say, the age will give a good indication of the average for the full population.
This holds provided that important criteria, such as the sample size and the level of variation in the population, are taken into account.
Consequently, if we take a new sample and calculate the average age again, the result will be close enough to the estimate we already know. The Central Limit Theorem allows us to be, say, 95% confident that the average age calculated from any randomly selected sample will be within a predefined “Confidence Interval”.
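As a small illustrative sketch (the ages below are simulated, not real data), here is how one could compute that 95% confidence interval for a sample mean using the normal approximation implied by the Central Limit Theorem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "population" of ages (hypothetical data, for illustration only)
population = rng.normal(loc=40, scale=12, size=100_000)

# Draw one random sample and estimate the mean age
sample = rng.choice(population, size=500, replace=False)
mean_age = sample.mean()

# 95% confidence interval via the Central Limit Theorem:
# sample mean +/- 1.96 standard errors
standard_error = sample.std(ddof=1) / np.sqrt(len(sample))
ci_low, ci_high = mean_age - 1.96 * standard_error, mean_age + 1.96 * standard_error

print(f"Estimated mean age: {mean_age:.1f} (95% CI: {ci_low:.1f} to {ci_high:.1f})")
print(f"True population mean: {population.mean():.1f}")
```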
This powerful mathematical tool is the same one that allows statisticians to call the results of an election very quickly and very precisely, for instance, just by using exit polls based on a very small sample of voters.
Regression analysis is based on a similar concept.
Instead of using the sample mean as an estimator, it uses a more complex estimator known as Ordinary Least Squares (OLS), or one of its variations. OLS uses one sample, in this case the historical data at hand, to estimate the coefficients of the different variables included in the model. The coefficients are the parameters we are seeking to estimate and are analogous to the average population age or to the score obtained by a presidential candidate. The coefficient of a variable would represent, for example, the incremental revenue generated by a media channel for every dollar spent, or the price elasticity, and so on.
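To make this concrete, here is a minimal sketch of such an estimation using simulated weekly data; the driver names (tv_spend, digital_spend, price, promo) and the "true" coefficients are hypothetical, not taken from a real model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_weeks = 104  # two years of simulated weekly history

# Hypothetical sales drivers
drivers = pd.DataFrame({
    "tv_spend": rng.uniform(0, 100, n_weeks),
    "digital_spend": rng.uniform(0, 50, n_weeks),
    "price": rng.uniform(8, 12, n_weeks),
    "promo": rng.integers(0, 2, n_weeks),
})

# Simulated revenue generated from known "true" coefficients plus noise
revenue = (500
           + 2.0 * drivers["tv_spend"]
           + 3.5 * drivers["digital_spend"]
           - 20.0 * drivers["price"]
           + 80.0 * drivers["promo"]
           + rng.normal(0, 25, n_weeks))

# Estimate the coefficients with Ordinary Least Squares
X = sm.add_constant(drivers)
model = sm.OLS(revenue, X).fit()

# Each coefficient estimates the incremental revenue per unit of its driver
print(model.params.round(2))
```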
If the marketing manager trusts the analyst's Marketing Mix Model to explain the past, then they should also trust that the same estimated coefficients can be applied to a new sample in the future. If those coefficients cannot be trusted to estimate revenue for the next few months, then they equally should not be trusted to explain what happened in the past. After all, by design, the underlying assumption is that the incremental impact of every factor is stable over the previous year, the year before, and the one before that (i.e. the fit period). Consequently, it can be expected to remain stable at least over the next few months as well.
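Continuing the sketch above, applying the same estimated coefficients to a new period reduces to a single prediction call; the future spend plan below is, again, purely hypothetical:

```python
# Hypothetical spend plan for the next 8 weeks (same columns as the training data)
future = pd.DataFrame({
    "tv_spend": rng.uniform(0, 100, 8),
    "digital_spend": rng.uniform(0, 50, 8),
    "price": rng.uniform(8, 12, 8),
    "promo": rng.integers(0, 2, 8),
})

# Apply the coefficients estimated on the historical sample to the new sample
predicted_revenue = model.predict(sm.add_constant(future, has_constant="add"))
print(predicted_revenue.round(1))
```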
However, there are some conditions to be respected.
First, it is crucial that the variables in the model are all statistically significant.
That is, their impact on sales as observed in the training data (i.e. the sample) is genuine and not the result of random circumstances unlikely to be reproduced in the future. As a rule of thumb, the t-stat, which measures the significance of a variable's impact, should be above two. If this rule is not respected, the confidence intervals of the coefficients risk being very wide. Hence, the actual values we are trying to estimate may fluctuate widely depending on the input data, resulting in very unexpected predictions.
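With the fitted model from the earlier sketch, statsmodels exposes both the t-statistics and the coefficient confidence intervals directly:

```python
# Rule of thumb: the absolute t-statistic of each coefficient should exceed 2
print(model.tvalues.round(2))

# 95% confidence intervals of the coefficients; wide intervals signal
# coefficients that may fluctuate widely from one sample to another
print(model.conf_int(alpha=0.05).round(2))
```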
Second, for the results to be accurate, the sample used should be representative of the overall population.
Equally important, the historical data used to estimate the Marketing Mix Model should be representative of “what usually happens” or what would happen in the future. So, for example, the creative used in the ads should remain the “same”, although in practice “the same” actually means “close enough”: the same content, the same laydown, the same strategy, etc. The economic situation should also allow us to assume that consumer behavior is maintained, and hence the same responses to media, promotions, and other stimuli.
For this reason, the prediction would usually be trusted only for a short period into the future, say a few months. Consequently, it is recommended to refresh the models quite frequently, ideally monthly or quarterly. Refreshing the models does not necessarily mean redoing them; it should suffice to merely re-estimate the coefficients of the same models with the new data and keep an eye on the different statistical metrics used to assess their quality.
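In code, such a refresh can be as simple as re-estimating the same specification on the extended data window. The helper below is a hypothetical sketch (the function name and column layout are assumptions, not a prescribed API):

```python
import pandas as pd
import statsmodels.api as sm

def refresh_mmm(history: pd.DataFrame, new_data: pd.DataFrame, target: str = "revenue"):
    """Re-estimate the same model specification on the extended data window."""
    full = pd.concat([history, new_data], ignore_index=True)
    X = sm.add_constant(full.drop(columns=[target]))
    refreshed = sm.OLS(full[target], X).fit()
    # Keep an eye on the usual quality metrics after every refresh
    print("R-squared:", round(refreshed.rsquared, 3))
    print("t-stats:\n", refreshed.tvalues.round(2))
    return refreshed
```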
Refreshing the models at this frequency is feasible if advertisers, or their analytics agencies, have the right tools to reduce the cost of typically expensive Marketing Mix Modeling projects. How to do that? An article addressing this topic in detail will follow soon.
Dr. Firas Jabloun,
Chief Technology Officer, MASS Analytics