3 incrementality testing mistakes — and how to avoid them


Everyone is talking about incrementality these days, and that’s a good thing. It means more teams are finally asking the right question: Are our efforts actually moving the business, or are we just getting better at taking credit?

The flip side is that many of the same mistakes persist, now appearing more frequently and with larger budgets attached. As incrementality becomes a cornerstone of performance marketing measurement, here are three to avoid.

Mistake 1: Not deeply understanding what you’re trying to learn

Problems often start when a team says, “We want to test Meta” or “We’re doing a PMax lift study,” and that is where the thinking stops. There is no clear articulation of what decision the test is meant to inform or what success would look like.

Then the results come back. An iCPA or iROAS does not match what attribution showed. A confidence interval presents a range of possible outcomes rather than a single number. The reaction is surprise. This typically happens when teams move too quickly without thinking through why they are running the test.

Before talking about running a test, teams should be able to answer a few simple questions in plain language:

  • What exactly are we trying to learn?
  • What is leading to this line of inquiry?
  • What will change if we learn X, and what will change if we learn Y?

Having these answers makes it easier to interpret results once they are in. In a perfect world, there is a decision tree in place, so there is no debate about what to do next. The action is already defined based on where the result lands.
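
As a minimal sketch, a decision tree can be as simple as a pre-agreed mapping from where the result lands to an action. The thresholds and actions below are hypothetical and would be set in the test brief before launch:

```python
# Hypothetical decision tree agreed in the test brief before launch.
# Thresholds and actions are illustrative, not prescriptive.

def next_action(icpa: float, target_icpa: float = 60.0) -> str:
    """Map the observed incremental CPA to a pre-agreed action."""
    if icpa <= target_icpa * 0.8:
        return "Scale budget 20% and retest next quarter"
    if icpa <= target_icpa:
        return "Hold budget flat; keep optimizing creative and audiences"
    if icpa <= target_icpa * 1.5:
        return "Restructure the campaign, then rerun the test"
    return "Pause the tactic and reallocate the budget"

print(next_action(icpa=72.0))  # -> "Restructure the campaign, then rerun the test"
```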

Dig deeper: 4 steps to kickstart incrementality without overcomplicating it

Mistake 2: Assuming insight gathering itself is valuable 

The second mistake is treating incrementality as an academic exercise. A test runs. A deck gets produced. Someone says:

  • “This drove X% lift.”
  • “This campaign is X% incremental.”
  • “This is more incremental than that.”

Everyone nods. A few charts get pasted into a larger meeting deck. Then nothing changes. In these cases, the issue lies in both inaction and imprecise language surrounding incrementality testing. This may sound nitpicky, but it matters.

  • “X% lift” relative to what?
  • Is that lift on revenue, conversions, new customers or profit?
  • Does it translate into a good or bad iCPA?
  • Does it clear a contribution margin hurdle?

For a small brand, a 1% lift might be indistinguishable from noise, making it an easy decision to turn something off. For a large brand, the same 1% lift might be worth millions. In both cases, relating lift to spend is critical.

Dig deeper: What your attribution model isn’t telling you

If the output of a test is reported as “Meta drove 14% lift” but is never translated into iROAS or iCPA, incremental contribution margin and, most importantly, a decision about what will change as a result, then the test did not accomplish much.

This is also where marketing and finance alignment can either help or hurt. Showing finance a slide that says “X% lift” and expecting it to resonate is unlikely to work.

A more effective approach is a very literal description of results:

  • “Without marketing spend in Meta, here is what we would have expected to happen. With Meta on, here is what actually happened. The difference between those two outcomes is what we credit to Meta. That translates to an incremental CPA of X and an incremental ROAS of Y. Given our margins, spending Z on these tactics added N dollars of contribution profit.”

This framing may feel dry and runs counter to the preference for a single bold number per slide. However, it does a few essential things:

  • It clarifies what lift actually represents.
  • It highlights cases where a channel looks strong in attribution but weak incrementally.
  • It gives finance something concrete to use in a model or P&L, rather than a vague incrementality percentage.
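
To make that framing concrete, here is a minimal sketch with purely illustrative numbers (none of these figures come from a real test). It walks the same path: expected outcome without the channel, actual outcome with it, and the resulting iCPA, iROAS and contribution profit:

```python
# Illustrative translation of a lift result into finance-ready numbers.
# All inputs are hypothetical.

spend = 100_000             # media spend in the test cell during the test window
baseline_revenue = 500_000  # expected revenue with the channel off (counterfactual / holdout)
observed_revenue = 570_000  # actual revenue with the channel on
incremental_orders = 1_400  # orders credited to the channel by the test
gross_margin = 0.60         # contribution margin rate before marketing

incremental_revenue = observed_revenue - baseline_revenue   # 70,000
lift_pct = incremental_revenue / baseline_revenue           # 14% lift
iroas = incremental_revenue / spend                         # 0.70 incremental ROAS
icpa = spend / incremental_orders                           # ~$71 incremental CPA
contribution_profit = incremental_revenue * gross_margin - spend  # -58,000

print(f"Lift: {lift_pct:.0%}, iROAS: {iroas:.2f}, iCPA: ${icpa:.0f}, "
      f"contribution profit: ${contribution_profit:,.0f}")
```

Note how a headline 14% lift can still be contribution-negative once spend and margin are applied.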

If you cannot answer “So what?” in one or two simple sentences after a test, you are still collecting interesting facts about your media, not using measurement as a clear growth lever.

Mistake 3: Forgetting to optimize campaigns

Incrementality testing often gets treated as a pass-or-fail grade for a tactic, rather than as a feedback loop. The pattern usually looks like this:

  • A team runs an incrementality test on a tactic such as ASC, PMax, YouTube or CTV.
  • The iCPA or iROAS comes back worse than expected or worse than attribution.
  • The conclusion is either “this doesn’t work” or “the test must be wrong.”

What gets missed is that optimization is still the job. Moving from attribution to incrementality as the foundation of a measurement framework does not change that. If anything, it makes optimization more honest.

Consider a common setup. A team is running PMax on Google and Advantage+ Shopping on Meta. Attribution shows both performing exceptionally well. An incrementality test yields a higher iCPA and lower iROAS than expected.

That result does not mean those products can never be profitable. It means they are not profitable in their current state. The next step is optimization. That might include removing branded search or obvious bottom-of-funnel capture from PMax, or lowering the existing customer share in Advantage+ and pushing more budget toward actual prospecting.

Dig deeper: Marketing results don’t add. They multiply and synergize.

This is where the moment-in-time nature of incrementality becomes an advantage rather than a limitation. 

  • MMM will continue to treat PMax or ASC as the same campaign it has a year or more of history on, even if a meaningful structural change has just been made. 
  • Attribution will often appear worse because those campaigns are capturing fewer last-click conversions that would have occurred anyway. 
  • Incrementality testing is the only method that isolates the impact of the new setup versus the old one, without incorporating months of irrelevant historical data.

Teams can then compare the old iCPA and iROAS with the new iCPA and iROAS, as well as the old attribution performance with the new attribution performance. From there, internal incrementality assumptions can be updated accordingly.
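
As a rough sketch with hypothetical numbers, that before-and-after comparison might look like the following, where the incrementality factor is taken to mean incremental conversions divided by platform-attributed conversions. Note that attributed conversions fall even as the tactic becomes more incremental:

```python
# Hypothetical before/after comparison following a campaign restructure.
# "Incrementality factor" here means incremental conversions divided by
# platform-attributed conversions; all numbers are illustrative only.

def summarize(label, spend, incremental_conversions, attributed_conversions):
    icpa = spend / incremental_conversions
    factor = incremental_conversions / attributed_conversions
    print(f"{label}: iCPA ${icpa:.0f}, incrementality factor {factor:.0%}")

# Before: PMax still absorbing branded search and bottom-of-funnel capture
summarize("Old setup", spend=80_000, incremental_conversions=900,
          attributed_conversions=2_400)

# After: branded terms excluded, budget pushed toward prospecting
summarize("New setup", spend=80_000, incremental_conversions=1_300,
          attributed_conversions=1_800)
```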

This approach requires more work. Measurement roadmaps cannot be a linear series of one-time tests. Tests may need to be rerun after campaign changes, and incrementality factors will evolve.

If teams are not prepared to optimize based on incrementality results, they are effectively paying to learn what campaigns are delivering in the current moment and then choosing not to act on it. To avoid that trap:

  • Treat each test as a step in an optimization loop, not a verdict.
  • Build in time and budget to rerun tests after meaningful campaign changes.
  • Expect attribution metrics to look worse at times and be prepared to explain that with incrementality results.
  • Include a decision tree in the test brief so post-test actions are clear.

Handled this way, that limitation becomes a strength. Teams get focused feedback on what is working right now and a clear path to making media more incremental over time.

Bringing it all together

If incrementality is going to be more than a buzzword inside an organization, these three mistakes need to be addressed:

  • Do not launch tests without a clear learning objective and a decision tree in place.
  • Do not stop at insights. Tie every result to iCPA or iROAS, contribution margin and concrete next steps.
  • Do not treat tests as verdicts. Use them as inputs into an ongoing optimization loop.

Handled correctly, incrementality stops being a side project and becomes a grounding force that helps shift marketing from a cost center to a profit center.




