MMM easy button: how to hit the easy button for media mix modeling
MMM is notoriously difficult.
You can take months gathering all your data, building models, adding external data, configuring boundaries, and running MMM … only to end up with obviously insane results. Or results that are so sanity-checked and artificially bounded that they deliver only what you pretty much already know.
So … how do you hit the easy button for media mix modeling?
MMM easy button: Singular?
The simple answer is: use Singular as your MMP. This might sound super self-serving (and maybe it is) but that doesn’t mean it’s wrong.
Here’s why Singular is the easy button for MMM:
- Total spend coverage out of the box from thousands of ad partners, zero configuration required. Result: hundreds of hours saved.
- Fully automated onboarding because Singular already has all of your historical data. This takes a couple of days but all the prep work is on Singular’s side … not yours.
- Market trends baked in thanks to the fact that Singular is one of the top global marketing measurement companies. We see billions of installs and trillions of clicks, meaning when there’s a holiday, event, disaster, trend, or any other spike or drop due to seasonality or anything else, we see it. And you don’t have to input all that global environment data and then make semi-rational guesses about how it impacts your app.
- Instant comparison with direct attribution data thanks to Singular managing your marketing measurement and collecting first-party data via SKAN, IDFA, GAID, Privacy Sandbox (in time), in-app events, ad network clicks and impressions … you name it. This drives accurate and meaningful results thanks to world-class MMM model calibration.
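That last bullet, calibrating the model against first-party attribution data, is the piece that trips up most do-it-yourself MMM projects. Singular hasn’t published its model internals, so treat the following as a minimal, purely hypothetical Python sketch (the channel names, spend levels, and ROAS figures are invented): the idea is to let last-click ROAS from your MMP define loose boundaries for the ROI coefficients the regression is allowed to return, instead of hand-coding safeguards.

```python
# Hypothetical sketch only: channel names, numbers, and the simple linear
# response are invented, and this is not Singular's actual model.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(42)
channels = ["network_a", "network_b", "network_c"]
weeks = 104  # roughly two years of history

# Weekly spend per channel (rows = weeks, columns = channels).
spend = rng.uniform(5_000, 50_000, size=(weeks, len(channels)))

# "True" incremental revenue per dollar, unknown in practice; used here
# only to simulate the revenue series the model is fit against.
true_roi = np.array([1.8, 1.2, 0.6])
baseline = 40_000  # weekly organic / brand-driven revenue
revenue = baseline + spend @ true_roi + rng.normal(0, 5_000, weeks)

# Last-click ROAS per channel, as reported by the MMP's attribution data.
# Not ground truth, but a sane region for the MMM to search within.
last_click_roas = np.array([1.5, 1.4, 0.9])
lower = np.concatenate(([0.0], 0.5 * last_click_roas))     # intercept >= 0
upper = np.concatenate(([np.inf], 2.0 * last_click_roas))  # cap crazy ROIs

# Bounded least squares: the fit can disagree with last click,
# but not by an implausible amount (no negative or 1,000% ROIs).
X = np.column_stack([np.ones(weeks), spend])
fit = lsq_linear(X, revenue, bounds=(lower, upper))

for name, coef, roas in zip(channels, fit.x[1:], last_click_roas):
    print(f"{name}: modeled ROI ~ {coef:.2f} (last-click ROAS {roas:.2f})")
```

A real MMM adds adstock, saturation curves, seasonality, and priors rather than hard bounds, but the calibration principle is the same: attribution data narrows the search space so the model can’t return obviously insane numbers.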
“When we designed the product, we wanted to rely on our strengths and our real differentiation in the market,” Singular CTO Eran Friedman told me in a recent Growth Masterminds interview. “For MMM to be accurate, you want to make sure that you have 100% coverage and that you have a lot of history. And just collecting all these data sources, preparing them, making it formatted for the MMM to start processing, that can take ages.”
If you’re running MMM on your own, the data sources are a headache. The calibration is a problem. One of the biggest challenges, however, is the market trends. Unless you have access to excellent and relevant trend data, know how to add that into the MMM model, and guess accurately how heavily to weigh that data … it’s a nightmare.
Which means that seemingly organic spikes or dips in install volume are mysteries.
One example: The new Threads app.
100 million installs in a week for a brand-new app causes a disturbance in the mobile ecosystem. Which is why one app — also called Threads — quickly updated its name to “Threads – Not By Instagram.”
“Unless you know to input these per country, basically in your model, you might think that was an effect of your marketing spend, while in truth, that was actually an external factor that affected your results,” says Friedman. “So just seeing that is really fascinating to see how it affects the outcomes.”
The Singular MMM model: Robyn? Lightweight MMM?
Singular’s not using the open-source Robyn or Lightweight MMM, though it’s learning from both.
“We’re definitely heavily inspired by Robyn and Lightweight,” Friedman says. “After assessing all of these, we’ve ended up building our own model with some of the similar concepts. We tried to take some of the advantages for each one. But we felt that there is some more customization that we can do based on our knowledge of the industry.”
Trends are one example.
Rather than a binary yes/no trend input, Singular MMM uses a time-weighted regression model plus a proprietary Bayesian model with a few special twists.
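To make that distinction concrete, here’s a small, purely illustrative Python sketch (the lift values and the per-country “organic lift index” are invented assumptions, not Singular’s data) of the two kinds of trend input: the binary holiday dummy you’d typically hand-feed an open-source framework versus a continuous, benchmark-derived regressor that already encodes how much organic volume usually moves on that date in that country.

```python
# Illustrative only: the lift values below are made up, and a benchmark-derived
# "organic lift index" is an assumption about how such an input could look.
import pandas as pd

dates = pd.date_range("2023-11-20", "2023-11-30", freq="D")
df = pd.DataFrame({"date": dates, "country": "US"})

# Open-source-framework style: a yes/no dummy for the holiday date.
df["black_friday_flag"] = (df["date"] == "2023-11-24").astype(int)

# Benchmark style: how much organic volume typically moves around that date
# in this country, expressed as a multiplier relative to a normal day.
benchmark_lift = {
    "2023-11-23": 1.10,  # Thanksgiving
    "2023-11-24": 1.60,  # Black Friday peak
    "2023-11-25": 1.30,  # weekend spillover
}
df["organic_lift_index"] = (
    df["date"].dt.strftime("%Y-%m-%d").map(benchmark_lift).fillna(1.0)
)

print(df[["date", "black_friday_flag", "organic_lift_index"]])
```

Both columns would enter the regression alongside spend; the difference is that the binary flag forces the model to estimate the size of the effect from your own history alone, while the continuous index carries information from outside your own data.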
Client results on Singular MMM: surprises
MMM is not only challenging to implement on your own; adding yet another measurement capability for your user acquisition or growth team poses its own challenges:
- What do you use it for?
- Who uses it?
- How often do you check it?
- What decisions does it help you make?
- What decisions does it not provide insight on?
- If it contradicts your last-click direct attribution data, which do you trust?
It’s the two-clock problem. When you have one clock, you know what time it is. Having two that might slightly (or greatly) differ makes you wonder.
The feedback has been surprisingly good, Friedman says.
“Some of the first feedback we got was, wow, that really aligns with the suspicions that we had internally about some of the differences between the channels … they might have a suspicion that a particular channel is being miscredited by last click based attribution and actually contributes more to the bottom line. But it’s really hard to prove without doing specific incrementality tests or like a general test.”
That’s generating more confidence in decision-making and budget allocation going forward.
“You see the results and actually see the difference between these,” Friedman says. “And that makes the team much more confident that, wow, there seems to be a significant impact there. Instead of being kind of suspicious and kind of getting into a dilemma, oh, no, what am I going to do? What am I going to rely on? It actually makes them more confident that it’s worth doubling down on testing that thesis, maybe really running that incremental test and seeing if it’s driving better results.”
Customers testing Singular MMM are consistently reporting interesting and valuable results, he adds.
That’s great, but the second-best response has been about the amount of work needed to get these kinds of results: almost none.
For most Singular clients who have tried MMM, it’s been a huge lift to get the data, maintain the data, build the model, maintain the model … and then get mediocre or inconsistent results. Now they’re getting MMM reports painlessly. Out of the box. Instantly.
“That’s been mind blowing for some of these, right?” Friedman says. “So just, okay, fully automated, they didn’t need to do anything, just getting those initial insights. That’s been really amazing. They can suddenly dive into the results and from there kind of think about how to reiterate it.”
Much more in the full podcast and video
Don’t miss this Growth Masterminds episode, or any of the others …
Subscribe to Singular’s YouTube and to the audio podcasts to never miss a show. Recent guests and topics have included:
- E-commerce and in-game fraud
- Alt-UA with Fluent co-founder Matt Conlin
- Power shifts in the ecosystem with guests like InMobi’s chief business officer Kunal Nagpal
- Generative AI in games with Unity CEO John Riccitiello
- Massive games growth with Dive’s Elad Levy (who sold his games company to Playtika)
- Privacy Sandbox on Android
- Targeting in the era of privacy
- Reducing app subscription cancellations
- And … so … much … more!
Hey: need a transcript because you read faster than you watch? Here’s a full transcript, just for you.
Please note it’s largely AI-generated. Listen or watch to get the exact message.
John Koetsier:
Have we found the easy button for MMM? Hello and welcome to Growth Masterminds. My name is John Koetsier.
I’ve never heard triple-M, media mix modeling, marketing mix modeling described as easy. And even saying it right now, it kind of feels like a bit of a stretch … but we may have found the closest thing to an easy button in a new beta program from Singular.
To dive in and learn more, we’re joined by CTO, Eran Friedman. Welcome, Eran.
Eran Friedman:
Thank you for having me, John. Great to be here.
John Koetsier:
Awesome, always great to have you. You’re probably like, this is the fourth time I’m on, people are gonna get bored!
Well … probably not, cause you’ve got good insights, so that’s excellent. Let’s just ask the stupid question right up front.
Have we found the easy button?
Eran Friedman:
I think we definitely made it much easier. That’s what I believe. And there’s a lot of potential for even additional improvements.
But I think with the technology advancements that we’ve had in the last few years and all the investment we’ve done in Singular, I think it’s definitely significantly easier than what we had before, for sure.
John Koetsier:
So before we go into the details of why it’s easier, what you’ve done, how it works and all that stuff … big picture, what’s hard about MMM?
Eran Friedman:
So I think if I try to break it down, anyone who has played around with MMM is probably very familiar with the common challenges out there, kind of starting from actually getting the data.
So MMM is, you know, a very data-science-heavy type of methodology. And it’s based on taking a lot of historical data, putting it in the model, and building a model on top of it. Whenever you speak with any data scientist who has tried to play around with it, they’ll say, okay, the first thing you need to do is really prepare the data. And that’s a lot of hard work.
For MMM to be accurate, you want to make sure that you have 100% coverage and that you have a lot of history. And just collecting all these data sources, preparing them, making it formatted for the MMM to start processing, that can take ages.
So I think that’s the first part to even start working.
Then the next thing is really focusing on the iteration and testing of the data. There’s a lot of consideration for accuracy or confidence in the results of the MMM. There are many methodologies out there, like techniques such as doing geolift or incrementality tests, where you kind of find ground truth.
There’s a lot of parameters that you can look into, like error metrics that you can basically measure to understand how close you are … how confident the model is in the results. But there’s basically a lot of tweaking and iteration to do that. The next piece, if you’re getting to something that you really feel confident about, is really the granularity. You know, it’s easy to just take a high level look.
The first part is to take kind of a high-level metric and kind of figure out … what’s your total budget, how does it affect your total outcome, maybe.
But then if you want to get more granular, think about it as a country-level segment, like a source break or even deeper than that, then again, it requires a lot more data and a lot more tweaking for each one of the segments that you’re trying to process.
And that doesn’t even get me to thinking about any of the other external factors that might be affecting your numbers.
So MMM relies a lot on having as much information as possible about what’s happening. So even if you know all the things that you’ve been doing that may affect the results, there’s also the consideration of what else might be happening outside, like with your competitors, with the industry in general, that might be affecting the results as well. So these are extremely hard to come by and collect to really understand … okay … what’s the bottom line result and how accurate is it?
So some of the examples of the challenges that we’ve been hearing a lot from anyone who really tried to play around …
John Koetsier:
This is making me tired already. I mean, like …
Eran Friedman:
Hahaha!
John Koetsier:
… this is definitely something that we need an easy button for because none of those challenges are quickly solvable.
You could spend months gathering all your data and building your models and testing. And then you’re like, what did I get from this? What did I achieve?
Okay … so … you didn’t say off the top that you found an easy button, but you said that you’ve found a way to make it significantly easier. How has Singular made it significantly easier?
Eran Friedman:
Right. So I think we really wanted to, when we designed the product, we wanted to rely on our strengths and our real differentiation in the market.
So Singular … we’ve been for years expanding our data collection, a core product, which is really focused on getting the full coverage across any type of spend that you might have in your marketing mix. That’s already been a core part of our product for years, right? So this has been our commitment to our customers to really collect any type of, you know, media channel integration, Excel file, whatever it is. Like we always want to cover 100% of your media budget. And that’s a core part of our product.
And that also means that all of our MMM customers get full coverage of their budget out of the box. It’s already built into the product.
More than that, we also have historical data for our customers, and we can also onboard historical data for our customers.
This is essential for MMM. It requires at least a year or two years of data to start processing the information in an accurate way. If you have less than a year, then you can run some testing to calibrate the model.
But definitely there’s also the fact that … we are already the measurement and data platform for our customers, which means that we already have their historical data. And we can input it pretty much automatically into the process too.
So we’re starting from the coverage and the historical data that is already available out of the box for us and that all our customers can enjoy.
Now, if we covered the data piece, then we’re getting to the calibration piece. So how do you know whether it’s accurate … or how do you calibrate it? And we’ll probably talk about it soon, but in any kind of open source framework that you’re using, like Robyn, Lightweight MMM, and such, you basically need to configure boundaries for the results. Because whenever you run MMM, you can get pretty crazy outputs. It can give you negative numbers. It can give you thousands of percent of ROI … the numbers that don’t make sense, right?
And it’s the job of the data scientists usually to put in some hard-coded numbers as safeguards and to show, okay, what are the general areas that they should be focusing on.
Now, again, the advantage that we also have as an MMP is that we have additional data points that we can use for calibration. Like if, for example, digital spend is part of your media mix, and that’s usually the case, you can actually rely on the attribution results, the direct attribution results, to help calibrate the model … putting in those boundaries, basically.
So not providing the ground truth, because it’s not necessarily the ground truth, but giving you those general areas the MMM should kind of focus on and helping calibrate the results.
John Koetsier:
Basically a sanity check.
Eran Friedman:
Exactly.
And then if we get to the last piece, the last challenge, which is really, you know, typically impractical for any data scientist trying to build it in house, it’s what’s happening outside of their existing efforts.
So what’s happening with the competitors … what’s happening in the industry. Something that if you use your own internal solution, no matter what open source framework you would use, you don’t really have access to. So when using the open source frameworks out there, they give you the option to manually input anything that you think might have an effect on your data, such as putting important dates of holidays or specific times in which things might have happened.
It’s so binary and pretty simplified.
John Koetsier:
Mm-hmm.
Eran Friedman:
The fact that we have a lot of data across many customers as part of our benchmarks data means we can actually automatically detect those trends and know per segment exactly what’s the effect of each special date that they have.
From analyzing the trends in the data, you can actually see, okay, this date was actually Christmas. This date was actually Thanksgiving. This was when the Super Bowl happened and how that has an organic effect on your sales without doing any work. It’s just in the data itself … data points that can really optimize those models for your data science, basically.
John Koetsier:
That’s so huge because that’s always been a question of mine when I look at MMM, you know, what if my competitor just unleashed this massive marketing campaign and they’re spending tens of millions of dollars a month and you know, that impacts me, that impacts the market and I didn’t know about that.
Or what if something else happens?
I mean, I’ve been thinking about the new Threads app.
When you search ‘Threads’ on the App Store, the official Threads app doesn’t come up first. At least it didn’t for me yesterday when I tried that. There’s a bunch of other apps called Threads and there’s one that says Threads – Not By Instagram. So …
Eran Friedman:
Hehehe
John Koetsier:
… they’ve adjusted that name already.
But you can bet that when something like that happens, some massive event happens, hundreds of millions of people are doing something … that has an impact elsewhere. And if you can’t see that, if you can’t notice that external reality, your MMM is going to be less intelligent than it could be, right?
Eran Friedman:
Yeah, I think you mentioned the Threads launches and as an impact on other specific apps. And you’re totally right.
And you can actually see it when you look at those trends; it’s really fascinating to see the effects of some of these on the results. Like one of the things we noticed was the effect of COVID on the organic traffic across all the games. So there was a specific date in March of 2020, basically, if I’m not mistaken, on which there were notifications about the need to have closures in certain countries, not in all of them, but in certain countries around the same time.
And you see from the trends in the benchmarks, there was a significantly large increase of just organic installs across the board with varying degrees per country.
So unless you know to input these per country, basically in your model, you might think that was an effect of your marketing spend, while in truth, that was actually an external factor that affected your results. So just seeing that is really fascinating to see how it affects the outcomes.
John Koetsier:
Absolutely, a rising tide lifts all boats, right?
And you know, you think you’re a hero, you’re a genius, your marketing is amazing, but you might be under the mean, you might be under the average of this rising tide. And so you need to be able to tease out those impacts and then understand the true value of what you’re doing.
You mentioned a couple minutes ago, Robyn, which is an open source framework for MMM. There’s a bunch of other frameworks for this. What framework is Singular using? Are you using Robyn or using some other technology?
And what were your decisions that led to whatever tech you’re using?
Eran Friedman:
So we’re definitely heavily inspired by Robyn and Lightweight. There’s a bunch of these out there.
We’ve ended up, after assessing all of these, we’ve ended up building our own model with some of the similar concepts. We tried to take some of the advantages for each one. But we felt that there is some more customization that we can do based on our knowledge of the industry.
Like the thing about the trends that I’ve just mentioned, which is a bit more binary in Robyn, for example: you basically need to decide on specific holidays, dates basically, and it’s kind of like a yes-or-no kind of factor, where we saw that there’s much more that we can input in terms of specific effects that can be different per country and per segment. So we basically, again, used similar concepts, built our own time-weighted regression model and our own Bayesian model, and we’ve been testing both and adding the special flavors or inputs that we wanted in our own model.
John Koetsier:
That just underlines how valuable it is to have that other ecosystem data, right? Because let’s say you input the holidays. Well, what’s the impact? How do you know? You guess … you put the holidays in.
But if you see the actual data for the downloads for apps that are in similar verticals or different verticals, then you don’t have to guess that it was the 4th of July or it was something else … “something” happened. Reality changed. The cause is almost … I mean, it matters, but it almost doesn’t matter.
But reality changed.
And then how are you floating in and around that change? Are you outpacing the competition or not? Super, super interesting.
Talk a little bit about clients’ experiences so far. I think you released this in beta a couple months ago. There’s obviously been clients who’ve been working with it and working on it previous to that as well.
What are clients learning? What are they finding?
Eran Friedman:
Yeah, it’s been pretty amazing feedback, honestly. We’ve been really positively surprised by how customers have been reacting to this.
And one of the things that we wanted to make sure of, or the question we wanted to answer, is: how can customers feel confident about the results? And really, as a customer, if you see the MMM results, and you also see the attribution results based on whatever, SKAN, last click in general, then how would you reconcile between those two and understand what to rely on?
And we can talk a bit more about the long-term vision around hybrid measurement and how we think about this, but the initial reaction was so interesting to see because in many cases, we kind of opened it up for first beta customers and showed them the results.
You know, some of the first feedback we got was, wow, that really aligns with the suspicions that we had internally about some of the differences between the channels. And it’s been very consistent with these. You know, they might have a suspicion that a particular channel is being miscredited by last click based attribution and actually contributes more to the bottom line. But it’s really hard to prove without doing specific incrementality tests or like a general test.
And it’s not always easy in terms of operational changes to really manage.
But then you see the results and actually see the difference between these. And that makes the team much more confident that, wow, there seems to be a significant impact there. Instead of being kind of suspicious and kind of getting into a dilemma, oh, no, what am I going to do? What am I going to rely on? It actually makes them more confident that it’s worth doubling down on testing that thesis, maybe really running that incremental test and seeing if it’s driving better results.
And that’s been pretty consistent feedback across all of the beta customers that tested it early on: seeing new insights that made them either realize some suspicions they had or think about things in a different way and say, huh, that’s actually very interesting.
We’d never imagined that might have such a dramatic effect.
I think one of the cool things is for some of the customers who’ve actually been playing around with MMM before, they provided feedback that it’s been so much work for them to get the data, maintain the data, maintain the model. And suddenly, the first experience is basically getting those MMM reports out of the box.
That’s been mind blowing for some of these, right?
So just, okay, fully automated, they didn’t need to do anything, just getting those initial insights. That’s been really amazing. They can suddenly dive into the results and from there kind of think about how to reiterate it, but just solving such a critical pain point in implementing MMM, that it’s had fantastic feedback from these.
John Koetsier:
There’s a lot to unpack in what you just said.
First of all, I think you just said, yeah, there is an easy button. I’m not gonna let you comment on that, so I’m leading with that. 🙂
But secondly, what’s interesting is I remember the conversations, some in Singular, some outside Singular, around MMM a year ago, two years ago, maybe three years ago …
MMM hasn’t ever gone away. It’s been around for a very long time, and there’s been some … push to do it here and there, and always the feedback that you’d get around the industry previously, let’s say pre-ATT, was: it’s not worth it. We have what we need. We can optimize. Things are working. Everything is fine. Why would you go there?
And what you’re saying is that people who are using this and doing this are getting insights. They’re getting value. They’re confirming suspicions, in other cases, finding new information that’s changing how they’re investing in advertising, but also maybe confirming some of the things they suspected.
The third thing that you said that was really interesting, and what I want to dive into right now, is that there’s been a concern, and we’ve heard it from some really high-end, really amazing, really experienced mobile marketers that I really respect, that when you have multiple measurements, what is true?
It’s the old question, right? When you have two clocks, you don’t know what time it is, because they’re different, right? So you don’t know which one is true. When you have one, you just accept it because it’s the reality, it’s the default reality.
And what you said is that adding this type of measurement hasn’t increased the confusion, hasn’t made things harder, it’s actually made things simpler. Dig into that a little bit harder.
Where is this measurement methodology slotting in for clients?
Eran Friedman:
Yeah, and I think it’s a great question. It really connects with the strategy or approach that we’ve been writing a lot about and talking a lot about with the customers. And I think it all depends on the expectations and the use cases that customers need.
If they’re trying to use MMM to optimize their creatives or sub-campaigns, that’s going to be very difficult. And I think for some marketers who tried these approaches for very granular optimization, the results were either completely off or, like, super, super specific, basically mirroring the attribution results, and it just didn’t have much value. And I think that was maybe one of the different approaches that, in our opinion, was the wrong approach.
And what we’ve been talking about in hybrid measurement is that there’s a lot of datasets out there, from SKAN and Privacy Sandbox to the user-level samples to the incrementality tests, etc. And each one has different use cases. They should provide different insights to answer different questions from each other.
So you might find that the best data sets for optimizing your campaigns or operations day to day are actually those digital last-click-based frameworks.
But if you’re trying to plan your budget for the next quarter or understand the holistic kind of contribution of remarketing efforts to results, last click actually has some disadvantages there, right? It’s very hard for you to actually understand what’s the effect of your influencer campaigns or the virality kind of effects or the cannibalization between those channels.
And that’s where MMM shines a lot.
So when the customer understands the different use cases they should be using MMM for, things make much more sense. They actually see the differences in the results per channel, and they understand where they come from. What’s the cause? So obviously, when you look at last-click-based, and you understand the concept of last-click-based and the biases that it has and the advantages that it has, and you compare it with the MMM methodology for the channel more strategically or at a high level, you would see some differences.
But having that context, suddenly explains things and actually enables you … suddenly, you look at your operational results also in a different light, right? Because you also know that you’re missing some information.
And that enables you to make better decisions, essentially.
So if I try to summarize, it’s all based on the context of the use cases for each data set, and using it with the proper expectations and understanding. In the end, tying it back to the first question that you asked about the easy button: with the technology advancements that we’ve had, it’s definitely easy, I would say, to get the results, with everything that we’ve invested.
The challenge is the educational side, the training side, the mindset to understand when to use what, and how to reconcile these. And again, the first feedback that we’ve gotten has really surprised us for the better in terms of understanding how to use each one.
John Koetsier:
Let’s look forward. Let’s look at the future.
You’ve released the product in beta. Customers are using it. You’re getting positive feedback. What’s next? Are there more data points you want to add? Is there additional technology? I mean, everything is always evolving. So in some sense, that’s natural and normal. But where do you see this going? Any big picture things that you want to add?
Eran Friedman:
Yeah, wow, we have a lot of plans for this product. We feel it has tons of potential and there’s many different use cases and types of insights that the MMM reporting can provide.
Some of these are things that we’ve already talked about, the hybrid measurement approach, the ability to compare between the MMM reports and others and helping optimize the reporting side and usability side for the different questions or use cases that you have … so that’s definitely one thing that we’re continuously investing in.
There are also other utilities for MMM that traditionally haven’t been served by other measurement methodologies, such as budget allocation optimization. So those are actually areas where the open source frameworks also have, like, great tools and great visualizations and reporting that we can add to our product as well. And yeah, beyond that, we basically want to continuously make it more and more accessible and easy to use for any marketer who wants to try it out.
So those are some of the areas that we think about.
John Koetsier:
Super interesting because your first point makes me think like how you report, what that looks like when you come into your dashboard. That changes. That’s changing and evolving, right? And also depending on who you are and what questions you want to answer. And also if you want to compare your last click versus your MMM, what’s that look like? How’s that work? It’s apples and oranges, but it’s apples and oranges from the same orchard.
Eran Friedman:
Mm-hmm.
John Koetsier:
Lots of challenges, super interesting … and great that there is at least some part of an easy button for this notoriously challenging framework. Thank you so much, Eran.
Eran Friedman:
Yeah, thank you for having me again, John. Always fun speaking here.