Every week, marketing teams across the country look at their dashboards and make decisions that cost them real money. They see a campaign with a ROAS of 4.0 and another with a ROAS of 1.5, and they make the obvious choice: cut the loser, scale the winner.
Except it's not that simple. The number on the screen might look precise, but it's built on a foundation of assumptions that rarely hold up under scrutiny. What looks like a winning campaign might be hemorrhaging money. What looks like a loser might be quietly driving most of your revenue.
This is not a failure of intelligence. It's a failure of attribution. And until you understand how broken your measurement system really is, you will keep making the same costly mistakes.
What ROAS Actually Measures (And What It Doesn't)
Return on Ad Spend seems straightforward: revenue divided by ad spend. If you spent $1,000 and made $4,000, your ROAS is 4.0. Simple. Clean. Misleading.
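In code, the calculation is a one-liner, which is part of why the number feels so trustworthy:

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: revenue credited to a campaign divided by its cost."""
    return revenue / ad_spend

print(roas(4_000, 1_000))  # 4.0
```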
The problem is not the math. The problem is deciding what counts as revenue from that ad spend. Here is where everything falls apart.
When a customer clicks your Facebook ad, visits your site, and later buys through a direct traffic visit, how do you credit that sale? Most platforms use last-click attribution: the final touchpoint gets 100% of the credit. So if they clicked your Facebook ad last Tuesday, browsed directly three times this week, and finally converted, the direct traffic gets the sale. Facebook gets nothing.
Your Facebook campaign looked like it performed poorly. You cut the budget. But it was actually doing the hard work of finding interested customers, only to lose the credit to a direct visit that would have happened anyway.
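A minimal sketch of the last-click rule shows how mechanical the distortion is. The journey and channel names below are illustrative, not any platform's actual data model:

```python
from collections import Counter

def last_click_credit(journeys: list[list[str]]) -> Counter:
    """Give 100% of each conversion's credit to the final touchpoint."""
    credit = Counter()
    for touchpoints in journeys:
        credit[touchpoints[-1]] += 1  # everything before the last touch is ignored
    return credit

# Facebook found the customer; a direct visit closes and takes all the credit.
journeys = [["facebook_ad", "direct", "direct", "direct"]]
print(last_click_credit(journeys))  # Counter({'direct': 1})
```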
This happens across every channel. Google Ads loses to organic search. Email loses to direct traffic. Display advertising almost always looks terrible under last-click, precisely because its job is to build awareness, not to close sales.
The result is a system that systematically undervalues the top of your funnel and overvalues the bottom. And when you optimize your budget based on this distorted view, you do exactly what the math encourages: you starve the channels that introduce new customers and overfund the channels that were going to get credit anyway.
The Multi-Touch Mirage
If last-click is broken, multi-touch attribution should fix it, right? Not exactly.
Multi-touch models distribute credit across multiple touchpoints. A customer might see a display ad, click a Facebook promotion, watch a YouTube brand video, and finally convert through a branded search. Under a multi-touch model, each of these gets some portion of the credit.
The problem is that multi-touch models still rely on the same underlying data, and that data is incomplete. They cannot see what happened outside your tracked channels. They cannot account for offline influences like word-of-mouth, trade show conversations, or competitor ads that actually drove someone to search for your brand.
More importantly, every multi-touch model makes assumptions about how credit should be distributed. Linear models give equal credit to every touch. Time-decay models favor recent touches. Position-based models give extra weight to first and last. Data-driven models use statistical algorithms to estimate contribution.
None of these are objectively right. They are all guesses dressed up in mathematical clothing. And depending on which model you choose, the same campaign can look like a star performer or a total loser. This is not a foundation for confident decision-making.
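To see how much the choice of model matters, here is a toy implementation of three of those weighting schemes applied to the four-touch journey described above. The weights are common simplifications, not any vendor's exact algorithm:

```python
def distribute(touches, weights):
    """Accumulate credit per channel (a channel can appear more than once)."""
    credit = {}
    for touch, weight in zip(touches, weights):
        credit[touch] = credit.get(touch, 0.0) + weight
    return credit

def linear(touches):
    """Equal credit to every touchpoint."""
    n = len(touches)
    return distribute(touches, [1 / n] * n)

def time_decay(touches, half_life=2):
    """Credit doubles every `half_life` steps closer to the conversion."""
    raw = [2 ** (i / half_life) for i in range(len(touches))]
    total = sum(raw)
    return distribute(touches, [w / total for w in raw])

def position_based(touches, endpoint_share=0.4):
    """40% each to first and last touch; the middle splits the remaining 20%."""
    n = len(touches)
    if n == 1:
        return {touches[0]: 1.0}
    if n == 2:
        return distribute(touches, [0.5, 0.5])
    middle = (1 - 2 * endpoint_share) / (n - 2)
    return distribute(touches, [endpoint_share] + [middle] * (n - 2) + [endpoint_share])

journey = ["display", "facebook", "youtube", "branded_search"]
for model in (linear, time_decay, position_based):
    print(f"{model.__name__}: {model(journey)}")
```

Run it and display's share of that one conversion ranges from roughly 14% (time-decay) to 40% (position-based), with no change in what the customer actually did.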
The Cookie Crisis: When Measurement Breaks Completely
Attribution was always imperfect, but at least it worked reasonably well for tracked digital channels. Then privacy regulations and browser changes started dismantling the tracking infrastructure itself.
Apple's App Tracking Transparency framework alone caused massive disruption to Facebook's and Instagram's ability to measure iOS conversions. Users opted out of tracking at unprecedented rates, leaving the platforms to infer conversions through statistical modeling rather than observed behavior.
Google's repeatedly delayed plans to retire third-party cookies in Chrome point in the same direction. The tracking that made digital attribution possible is being deliberately wound down, and the numbers on your dashboard will only become less reliable.
What does this mean in practice? The ROAS you see today is already less accurate than it was two years ago. It will continue to degrade. The gap between reported performance and actual performance will grow. And teams that continue to trust these numbers uncritically will keep making increasingly expensive mistakes.
The Real Cost: What You Cannot See
All of this leads to a hidden tax on your marketing budget. You cannot see it in any dashboard. There is no line item labeled "wasted spend due to misattribution." But it is there, every single day, in every budget decision you make.
When you cut a channel because its ROAS looked low, you may be eliminating a valuable top-of-funnel source. When you scale a channel because its ROAS looked high, you may be pouring money into a channel that was getting credit for sales it did not create.
Over months and years, this compounds. Teams optimize toward phantom performance. Budgets drift away from channels that actually drive new customer acquisition and concentrate in channels that merely ride existing demand. Growth slows. Customer acquisition costs rise. And the explanation always sounds reasonable: the data said so.
But the data was lying. Not intentionally. Not maliciously. Just incompletely. And the cost of believing it is real.
Beyond Attribution: What Actually Works
If traditional attribution is broken, what should you do instead? The answer lies in a different approach entirely: modeling.
Marketing Mix Modeling, particularly in its Bayesian form, takes a macro view of your marketing spend. Rather than trying to assign credit to individual touchpoints, it looks at how your overall marketing investment correlates with business outcomes over time. It accounts for things that touchpoint-level attribution cannot: seasonality, pricing changes, competitive dynamics, product launches, and external economic factors.
A well-built MMM does not tell you which individual campaign performed best on a given day. Instead, it tells you how each channel contributes to your total revenue on an aggregate basis, with quantified uncertainty. It can tell you that paid social, despite its low last-click ROAS, actually drives 25% of your incremental revenue when you account for its role in the customer journey.
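For a sense of the mechanics, here is a deliberately simplified, non-Bayesian sketch: transform each channel's spend with an adstock (carryover) function, then regress aggregate revenue on the transformed series. Every number, channel name, and decay rate below is invented for illustration; a production Bayesian MMM would put priors on these parameters, add saturation and seasonality terms, and report posterior uncertainty rather than point estimates.

```python
import numpy as np

def adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a fraction of each period's effect into later periods (ad 'memory')."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

# Two years of invented weekly data: per-channel spend plus total revenue.
rng = np.random.default_rng(0)
weeks = 104
social = rng.uniform(5_000, 15_000, weeks)
search = rng.uniform(10_000, 30_000, weeks)
baseline = 50_000  # revenue that would arrive with zero ad spend
revenue = (baseline
           + 2.5 * adstock(social, decay=0.6)   # strong carryover
           + 1.2 * adstock(search, decay=0.1)   # mostly immediate
           + rng.normal(0, 5_000, weeks))       # everything left unmodeled

# Regress revenue on adstocked spend to recover each channel's contribution.
X = np.column_stack([np.ones(weeks), adstock(social, 0.6), adstock(search, 0.1)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(coef)  # roughly [50_000, 2.5, 1.2]: baseline, social effect, search effect
```

Notice the baseline term: the model explicitly estimates the revenue that arrives with zero ad spend, which is exactly the demand that last-click attribution hands to whichever channel happened to touch the customer last.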
This is not a magic solution. Building a good MMM requires data, expertise, and ongoing maintenance. But it is dramatically more reliable than any attribution model for making high-stakes budget allocation decisions.
What This Means for Your Business
If you are running more than a few thousand dollars per month in advertising, the odds are high that your reported ROAS does not reflect reality. The exact number is wrong. Some channels look better than they are. Some look worse. And you cannot know from the dashboard which is which.
This does not mean you should stop measuring. It means you should stop trusting single numbers. Start looking at trends over time, not daily fluctuations. Compare outcomes across different attribution models. Use MMM to validate what you think you know. And when you cut a channel, do so with the humility that comes from knowing the data is incomplete.
The real cost of misattributed ad spend is not the money you spent on the wrong channel last month. It is the continued misallocation of budget going forward, based on false confidence in numbers that were never reliable in the first place.
See through the illusion. Question the dashboard. And build your marketing strategy on a foundation that matches the complexity of how customers actually buy.