Somewhere in the last 18 months, open-source marketing mix modelling entered the conversation at your organisation. Maybe it came from a data scientist who’d been experimenting with PyMC-Marketing or Google Meridian. Maybe a consultant mentioned that credible MMM frameworks are now free to use. Maybe your CFO asked why you are paying for MMM software when open-source tools exist.
It is a fair question. And it deserves a fair answer.
This is not a vendor brochure that dismisses free tools to sell you something. It is an honest look at what open-source MMM actually delivers, who it serves well, where it creates problems, and how to decide which path is right for your team.
“The barrier to starting a marketing mix modelling programme has never been lower. That is genuinely a good thing for the industry. It is also where the nuance begins.”
Why open-source MMM arrived when it did
Open-source MMM did not emerge from a research lab. It emerged from a business problem.
In 2021, Apple’s iOS 14.5 update made user-level tracking across mobile apps opt-in, and most users declined. For Meta, this meant the attribution data that powered its ad measurement degraded almost overnight. Last-click attribution began routing even more credit to lower-funnel channels like Google Paid Search, at the expense of Meta’s upper-funnel placements.
Meta needed a way to make the broader business case for its channels. Its answer was Robyn: an open-source MMM library that any advertiser could use to model the full-funnel contribution of their media spend, Meta channels included. Robyn’s release changed the market. It legitimised MMM for a much wider audience and significantly lowered the cost barrier to entry. Meta has since discontinued active development of Robyn, but its influence on what followed is hard to overstate.
Other organisations moved quickly to fill the space. Google released Meridian, its own open-source marketing mix modelling package, officially in January 2025. PyMC Labs built PyMC-Marketing, a Bayesian MMM library with a strong and active open-source community. Uber contributed Orbit, with its KTR (kernel-based time-varying regression) model.
Today, the two most actively maintained and widely adopted open-source MMM frameworks are Google Meridian and PyMC-Marketing. That is genuinely a good thing for the industry, and it is also where the nuance begins.
What open-source MMM actually is (and what it isn’t)
Before evaluating open-source tools, it helps to be precise about what they are and what they are not.
Meridian, PyMC-Marketing, and (historically) Robyn are all model-building libraries. They give a data science team a methodology, a set of statistical techniques, and code infrastructure to construct a marketing mix model from scratch. They are not complete marketing effectiveness measurement platforms.
They do not come with a user interface, a data preparation layer, built-in forecasting, or structured budget optimisation tools. What they provide is a rigorous foundation that a skilled team can build on.
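To make "model-building library" concrete: the statistical core these frameworks share is a pair of media-response transforms, adstock (carryover of advertising effect across weeks) and saturation (diminishing returns to spend). The sketch below is illustrative plain Python, not code from Meridian or PyMC-Marketing; in the real libraries, parameters like the decay rate and saturation curve are estimated via Bayesian inference rather than hand-set.

```python
def geometric_adstock(spend, decay=0.5):
    """Carry a fraction of each week's effect into later weeks (carryover)."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried
        out.append(carried)
    return out

def hill_saturation(x, half_sat=100.0, shape=2.0):
    """Diminishing returns: response flattens as effective spend grows."""
    return x ** shape / (half_sat ** shape + x ** shape)

# Four weeks of spend in one channel: a burst, two dark weeks, a smaller burst.
weekly_spend = [100.0, 0.0, 0.0, 50.0]
adstocked = geometric_adstock(weekly_spend, decay=0.5)
# adstocked == [100.0, 50.0, 25.0, 62.5] — the burst keeps working after spend stops
response = [hill_saturation(x) for x in adstocked]
```

Everything a platform adds — the interface, forecasting, and optimisation — sits on top of transforms like these; the libraries give you the engine, not the vehicle.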
Google Meridian
Released globally in January 2025. Built on Bayesian causal inference, integrates with Google’s MMM Data Platform, and supports non-media variables like pricing and promotions. Scenario Planner (launched Feb 2026, open beta) adds a no-code interface, but the data science model build remains a prerequisite.
PyMC-Marketing
A Python-based Bayesian MMM library with an active open-source community and strong adoption among data science teams. Highly flexible and well-documented. No native UI or budget optimisation tooling. The implementation and output translation remain your team’s responsibility.
Robyn
Meta’s original open-source MMM library. Widely cited as the tool that brought open-source MMM into the mainstream. Meta has since discontinued active development. Legacy implementations remain in use, but Robyn is no longer the recommended starting point for new programmes.
Both Meridian and PyMC-Marketing are genuinely capable frameworks for Bayesian marketing mix modelling. A competent data science team can produce quality outputs with either. The question is not whether the models work; they do. The question is what it costs to make them work at the level your business actually needs.
The real cost of open-source MMM
Here is the gap that open-source documentation does not make explicit.
A model that runs is not a measurement programme. The moment a marketing mix model produces outputs, the real operational work starts: interpreting what the outputs mean, validating the model against held-out data, building forecasts for next quarter’s planning cycle, running budget optimisation scenarios, updating the model when the media mix changes, explaining results to a CMO who does not read Python notebooks.
Open‑source tools provide none of this infrastructure. They deliver the statistical engine only. Everything downstream, from the interface and workflow to the interpretation layer, cross‑functional communication, and governance, becomes your team’s responsibility to build and maintain.
This is where the total cost of ownership of open-source MMM inverts.
Data engineering hours
Building and maintaining the pipeline that feeds the model is a standing cost in its own right. Open-source frameworks expect clean, structured inputs. Creating those inputs from raw media and sales data takes time, and keeping them current every time a channel changes takes more.
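As an illustration of the shape of that work, here is a minimal sketch (hypothetical column names and figures) of pivoting raw spend rows into the dense week-by-channel matrix these frameworks expect, including the zero-filling for channel-weeks with no spend:

```python
from collections import defaultdict

# Hypothetical raw export: one row per (week, channel) with spend recorded.
raw_rows = [
    {"week": "2025-01-06", "channel": "meta", "spend": 1200.0},
    {"week": "2025-01-06", "channel": "search", "spend": 800.0},
    {"week": "2025-01-13", "channel": "meta", "spend": 950.0},
    # note: no "search" row for 2025-01-13
]

channels = sorted({r["channel"] for r in raw_rows})

# Every week gets a value for every channel, 0.0 where nothing was spent:
# the dense layout an MMM library expects as input.
matrix = defaultdict(lambda: {c: 0.0 for c in channels})
for r in raw_rows:
    matrix[r["week"]][r["channel"]] += r["spend"]
```

The sketch fits in a dozen lines; the production version — with schema changes, restated historicals, new channels, and currency and geography handling — is the part that consumes the engineering hours.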
Data science time (6–16 weeks for first model)
Based on Deloitte’s MMM internalisation framework, a first validated model takes 6 to 16 weeks from kick-off to production. That range assumes a competent, focused team. Rerunning the model with updated data takes ongoing resource on top of that.
Tooling build
Budget optimisation scenarios, forecast outputs, and sensitivity analyses all need to be translated into formats your media planners and CMO can work with. In most open-source implementations, this is custom-built.
Dependency on team continuity
Open-source MMM programmes are vulnerable to knowledge concentration. When the data scientist who built the model leaves, institutional knowledge leaves with them. Rebuilding is expensive. This is the most common failure mode in in-house open-source programmes.
Deloitte Research Finding
Deloitte’s 2024 analysis of open-source MMM in-housing identifies this risk directly. Their framework notes that “failure rate in the internalisation of MMM techniques grows exponentially in absence of a clear strategy, with some MMM structures collapsing even before making it to production.”
The in-housing path is viable, but only for organisations that can commit the resource, governance, and team continuity it requires.
How to choose
The right question is not “open source or commercial?” The right question is: what does your marketing mix modelling programme need to do, and what do you have to build it with?
Open-source MMM is the right starting point if your team is entering MMM for the first time, your data science capability is strong, and your primary goal is learning and exploration. It is also the right tool if budget constraints are hard and you can absorb the build and maintenance cost.
If your goal is a production-ready, continuously updated MMM programme that your full commercial team can act on, a commercial platform is usually the stronger fit: you get the interface, forecasting, and optimisation layers without rebuilding the infrastructure every time the media mix or the business question changes.
If you are evaluating both paths seriously, the most useful thing you can do is map the full total cost of ownership for each. Not just the licence fee, but the data engineering, the modelling time, the communication layer, and the maintenance burden. The gap between the two options usually narrows significantly when that analysis is done honestly.

