Take control of your marketing data.

Incrementality testing

Incrementality testing measures causal lift by comparing a test group exposed to marketing with a holdout control group. Use it to quantify real impact (installs, events, revenue) and detect cannibalization—especially in privacy-first measurement.

Understanding incrementality: beyond attribution

Attribution tells you where a conversion was credited, but it does not prove the campaign caused it. Incrementality answers the causal question: what would have happened without the campaign?

A typical experiment uses a test group and a control (holdout) group that are as comparable as possible. The difference in outcomes is the incremental lift.

You can measure lift on installs, in-app events, retention, revenue, or LTV—depending on the decision you want to make.

With incrementality results, teams can calibrate spend across channels, creatives, and geographies based on measured impact—not just credited conversions.

Testing Framework

How incrementality testing works with Adshift data

Use measurement data (attribution, events, and revenue) to define test/control segments and evaluate lift across dimensions like channel, campaign, geography, and time.

Segment users into test and control groups

Use Adshift's attribution data to segment users into test and control groups based on campaign exposure, source, and timing. The key is creating clean test environments that isolate the impact of a specific campaign: install attribution identifies which users were exposed, post-install event data tracks their subsequent behavior, and user-journey data reveals the full path from exposure to conversion. Because Adshift's data is granular down to media source, campaign ID, creative, and placement, you can test specific hypotheses about what drives incremental value.
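As a minimal sketch of this segmentation step (the record fields and values below are illustrative assumptions, not Adshift's actual export schema):

```python
# Hypothetical attribution records; field names and values are
# illustrative, not Adshift's actual export schema.
installs = [
    {"user_id": 1, "media_source": "network_a", "campaign_id": "c1"},
    {"user_id": 2, "media_source": "organic",   "campaign_id": None},
    {"user_id": 3, "media_source": "network_a", "campaign_id": "c1"},
]

# Test group: users attributed to the campaign under test.
test = [r for r in installs if r["campaign_id"] == "c1"]

# Control group: users with no exposure to that campaign.
control = [r for r in installs if r["campaign_id"] is None]
```

In practice the same split would run over exported attribution data, filtered on whichever dimension (source, creative, placement) the hypothesis targets.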

Detect organic cannibalization

Organic cannibalization is one of the biggest hidden costs in mobile marketing: paid campaigns simply replace organic installs rather than generating new users. If a user browsing your app in the store would have installed it organically but clicked a paid ad first, you are paying for a user you would have gotten for free. By comparing a test group (exposed to campaigns) with a control group (not exposed), you can measure how much of your paid traffic would have occurred naturally. Adshift data surfaces these patterns by tracking the relationship between paid and organic installs, letting you detect when campaigns cannibalize organic growth instead of driving incremental value.

Calculate true incremental lift

Calculate the actual incremental value of your campaigns by comparing performance between test and control groups. Use Adshift data on installs, in-app events, revenue, retention, and LTV to quantify the lift your campaigns generate beyond what would have happened organically. For equally sized groups, the calculation is straightforward: subtract the control group's result from the test group's result. If the test group shows 100 installs and the control group shows 60, the incremental lift is 40 installs. True incrementality goes deeper, though: measure incremental revenue, incremental retention, and incremental LTV to understand the full value of your campaigns. Adshift provides the data needed to calculate these metrics across time periods and user cohorts.
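The arithmetic above can be sketched in a few lines; normalizing by group size makes the comparison valid even when the groups are not equally sized (the numbers are the illustrative ones from the example, not real campaign data):

```python
def incremental_lift(test_conversions, control_conversions, test_size, control_size):
    """Absolute and relative lift, normalized for group size."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    absolute_lift = test_rate - control_rate
    relative_lift = absolute_lift / control_rate if control_rate else float("inf")
    return absolute_lift, relative_lift

# Example from the text: 100 vs 60 installs, assuming 1,000 users per group
abs_lift, rel_lift = incremental_lift(100, 60, 1000, 1000)
print(f"{abs_lift:.1%} absolute lift, {rel_lift:.0%} relative lift")
```

The same function applies unchanged to incremental revenue or retained users; only the numerator changes.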

Compare incrementality across channels

Run incrementality tests across channels, campaigns, and partners to learn which sources deliver genuine incremental value. Not all channels are created equal: some show strong attribution metrics but weak incrementality, meaning they are good at capturing users who would have installed anyway. Segment attribution data by media source, campaign, creative, and other dimensions, then measure lift for each segment to separate channels that drive real growth from those that simply cannibalize organic traffic. With Adshift data, you can compare incrementality across Facebook, Google, TikTok, and other networks to optimize your media mix and allocate budget to the sources that deliver genuine lift.

Run privacy-compliant tests

Conduct incrementality analysis using aggregated data that complies with privacy frameworks like iOS 14+ SKAdNetwork and Google Privacy Sandbox. The shift toward privacy-first measurement doesn't rule out incrementality testing; it means working with aggregated rather than user-level data. Adshift provides aggregated metrics on installs, events, and revenue that support incrementality analysis while maintaining user privacy and regulatory compliance. SKAdNetwork postbacks, for example, deliver aggregated campaign performance that can be compared between test and control groups at the campaign level, enabling privacy-compliant incrementality testing even in the post-IDFA world.

Choose a testing methodology

Adshift data supports several incrementality testing methodologies. Geographic holdout tests compare regions where campaigns run against regions where they are withheld, using Adshift's geo-level attribution data. Time-based holdout tests compare periods with campaigns to periods without, using Adshift's time-series data. User-level holdouts withhold campaigns from some users at the campaign level (where privacy regulations allow); Adshift's attribution data then identifies exposed versus unexposed users for comparison. Across all of these designs, the goal is the same: build statistically significant test and control groups that isolate the true impact of your marketing.
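A geo holdout comparison on aggregated data might be sketched like this (region names and install counts are illustrative assumptions, not real Adshift output; a production test would also match regions on size and seasonality):

```python
# Hypothetical aggregated installs per region over the test window.
test_regions = {"north": 4200, "east": 3900}      # campaigns running
control_regions = {"south": 3600, "west": 3450}   # campaigns withheld

def geo_lift(test, control):
    """Average installs per region in each arm, and the lift between them."""
    test_avg = sum(test.values()) / len(test)
    control_avg = sum(control.values()) / len(control)
    absolute = test_avg - control_avg
    return absolute, absolute / control_avg

lift, rel = geo_lift(test_regions, control_regions)
```

Time-based holdouts follow the same comparison, with periods in place of regions.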

Optimize with test results

Use incrementality results to optimize your campaigns. Scale high-incrementality campaigns with confidence, knowing they drive genuine growth; pause or retarget campaigns with low or negative incrementality (cannibalization) to focus on truly incremental users. Adshift data lets you monitor performance and recalculate incrementality metrics as tests progress, so you can adjust as you learn which campaigns, creatives, and targeting strategies deliver the best incremental results. This iterative optimization process helps you maximize ROI by concentrating budget on what actually works.

Strategic Value

Why incrementality matters for mobile marketing

Incrementality complements attribution by quantifying causal impact. It helps teams allocate budget, evaluate new channels, and avoid paying for conversions that would happen anyway.

Optimize budget allocation

Detect low or negative lift and reallocate budget toward campaigns that add net-new installs, events, or revenue. This reduces waste from cannibalization and overlap with organic demand.

Prove marketing impact

Share causal results with finance and leadership to align on what marketing actually changes. Lift experiments help teams separate correlation from impact and make budget discussions evidence-based.

Make data-driven decisions

Replace assumptions with experiments. Use lift results to iterate on targeting, creative, and channel mix—and rerun tests to validate improvements.

Gain competitive advantage

As identifier-based measurement becomes harder, lift experiments provide a durable way to validate impact using aggregated and privacy-safe reporting.

Scale with confidence

Scale only after you confirm lift. Repeat tests when you change strategy (new creatives, new audiences, new geos) so growth decisions stay grounded in measured impact.

Frequently asked questions

What is incrementality testing?

Incrementality testing measures causal lift by comparing a test group exposed to campaigns with a control (holdout) group that is not. It answers whether your ads generated additional installs, events, or revenue beyond what would have happened organically.

How is incrementality different from attribution?

Attribution assigns credit for conversions; incrementality measures whether conversions were caused by the campaign. In practice, attribution can overstate impact when paid activity cannibalizes organic demand, while incrementality quantifies true lift.

What is organic cannibalization?

Organic cannibalization happens when paid campaigns capture users who would have converted organically. Incrementality testing detects it by measuring lift between test and control groups, helping you avoid paying for growth you would have gotten anyway.

Can I run incrementality tests with privacy-first measurement?

Yes. Incrementality can be analyzed with aggregated, privacy-safe data (e.g., SKAdNetwork and other privacy-first reporting). You compare outcomes between test and control at a campaign/geo/time level instead of relying on user-level identifiers.

What testing methodologies can I use?

Common approaches include geo holdouts, time-based holdouts, and campaign-level holdouts where platforms support withholding exposure. For analysis, you can also use synthetic control methods built on historical baselines to estimate what would have happened without spend.

How long should an incrementality test run?

It depends on expected lift and variability, but you generally need enough volume in both test and control to reach statistical confidence. Many teams run tests for 2–4 weeks and include delayed outcomes (like retention or revenue) if those are decision metrics.
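One common check for whether an observed lift clears statistical confidence is a two-proportion z-test; the sketch below uses illustrative numbers (1,000 users per arm) and is a simplification, not a substitute for proper power analysis before launch:

```python
import math

def two_proportion_z(conv_t, n_t, conv_c, n_c):
    """z-score for the difference between test and control conversion rates."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Illustrative: 100 vs 60 installs with 1,000 users in each arm
z = two_proportion_z(100, 1000, 60, 1000)
significant = abs(z) > 1.96  # ~95% confidence, two-sided
```

If the z-score falls short of the threshold, the usual remedies are a longer test window or larger groups rather than a smaller claimed effect.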

How should I act on incrementality results?

Use lift results to rebalance budget: scale high-incrementality campaigns and reduce or redesign low/negative-lift activity. Apply learnings to targeting, creative, and channel mix, and repeat tests to validate changes.

Ready to measure true marketing impact?

Start running incrementality tests with Adshift data to understand which campaigns drive real incremental value. Make data-driven decisions about budget allocation and campaign optimization.

Request a Demo