Published On: 24 Nov 2023
Here's a major challenge for modern marketing teams: how do you measure the impact of an advertising campaign beyond the digital ad platform or social media channels where the campaign is taking place?
The problem is that digital ad platforms and social media channels have no access to data outside their own environments, so they all fall back on last-touch attribution to demonstrate a campaign's effectiveness.
The cookieless, privacy-driven digital space makes things even harder. Without a reliable digital audience footprint, tracking the real impact of marketing efforts requires looking beyond vanity metrics.
That's where incrementality testing comes in: it reveals the true impact of marketing campaigns, especially multi-channel ones.
Incrementality measurement identifies both the channels that leak revenue and the ROI-generating ones worth optimizing further.
Let's see how incrementality tests quantify the impact of various marketing activities without being weighed down by continuous changes in the analytics field.
Incrementality testing is a statistical approach to assessing the impact of a marketing activity or campaign. It shows whether introducing a new campaign had a positive, negative, or null effect, and by how much. This quantification helps shape marketing efforts: double down on profitable campaigns and discard those with negligible results.
Suppose your marketing team is launching a new product and deciding whether to invest in LinkedIn Ads and other marketing channels. Incrementality testing can help decide whether introducing this campaign affects the desired outcome.
Here are five approaches to marketing incremental testing that would help evaluate the impact.
Holdout testing involves splitting the target audience into two groups: one exposed to the campaign and the other not. You determine the campaign's impact by comparing the outcomes between the two groups.
To build on the previous example, the team randomly selects a portion of its target audience to see the LinkedIn Ads and withholds the campaign from another portion (the holdout). Then they compare conversion rates between the two groups. If the exposed audience converts at a meaningfully higher rate, the campaign is retained and optimized further.
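The random split at the heart of holdout testing can be sketched in a few lines. This is a minimal illustration, not a production targeting system: the audience of user IDs, the `holdout_split` helper, and the 20% holdout fraction are all assumptions for the example.

```python
import random

def holdout_split(audience, holdout_fraction=0.2, seed=42):
    """Randomly split an audience into a test group (exposed to the
    campaign) and a control/holdout group (not exposed)."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = audience[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * holdout_fraction)
    control, test = shuffled[:cutoff], shuffled[cutoff:]
    return test, control

# Hypothetical audience of 10,000 user IDs
audience = list(range(10_000))
test, control = holdout_split(audience)
print(len(test), len(control))  # 8000 2000
```

Randomizing the assignment (rather than, say, splitting by region or signup date) is what lets you attribute any difference in outcomes to the campaign rather than to pre-existing differences between the groups.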
Matched market testing compares audience groups that share the same external environment. It works like holdout testing, but with similar market or audience conditions, which lets marketing teams account for external factors that could influence the results.
For instance, you target two departments within the same organization: marketing and finance. The departments share a similar environment with comparable demographics, but department-specific variables, such as the role and job function of the employees, will affect how each responds to LinkedIn Ads. This lets you compare results and assess the campaign's impact while controlling for those variations.
Multi-variate testing is a type of incrementality testing that involves testing multiple variables simultaneously to assess their combined impact. It's useful for evaluating how different campaign elements, like ad copy and visuals, affect the desired outcome.
For instance, the team runs LinkedIn Ads with variations in ad copy, visuals, and targeting options and analyzes the combined effect.
Geo-testing focuses on specific locations. You evaluate the regional impact by exposing some geographic areas to the campaign and leaving others out. For instance, the team selects certain regions for LinkedIn Ads and excludes others, allowing it to gauge whether location has any considerable effect.
Time-based testing assesses the impact of the campaign over different time periods. It helps understand how seasonality, day of the week, or time of day affect the campaign's performance.
For instance, the team runs LinkedIn Ads at different times, such as weekdays vs. weekends and mornings vs. evenings. Then, they analyze which timeframes give better results.
Zachary Cascalho Cox, a growth expert at Google, suggests three ways brands can think about proving out their media channels:
While you can deploy any of the marketing incrementality measurement approaches discussed above, here are some core concepts you must understand before starting:
For incremental testing, you split the audience into two groups: control and test.
As a rule of thumb, the control group should be at least 10% of the size of the test group.
Then, you expose the campaign to the test group and isolate the control group.
You measure the KPIs and analyze their results.
The formula to calculate incrementality is:

Incrementality (%) = ((Test group KPI - Control group KPI) / Test group KPI) x 100
For instance, you want to assess the incremental lift of a LinkedIn Ads campaign targeting two departments: marketing and finance. You run the campaign and measure click-through rate (CTR) as the KPI. After the campaign, the test group's CTR is 8% and the control group's is 3%.
The test group's higher CTR points to a positive effect. Plugging the numbers into the formula:
Incrementality = ((8 - 3) / 8) x 100 = (5 / 8) x 100 = 62.5%
This means roughly 62.5% of the clicks in the test group can be attributed to the ad.
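The calculation above is easy to make reusable. A minimal sketch, where the `incrementality` helper is a hypothetical name implementing the formula from this section:

```python
def incrementality(test_kpi, control_kpi):
    """Share of the test group's result attributable to the campaign:
    ((test - control) / test) * 100."""
    return (test_kpi - control_kpi) / test_kpi * 100

# CTRs from the LinkedIn Ads example: 8% in the test group, 3% in control
print(incrementality(8, 3))  # 62.5
```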
Once you calculate the incrementality of marketing, the next step is to measure its impact. The three metrics to evaluate incrementality testing impact are:
Lift measures the cumulative effect of the campaign on the desired KPI, quantifying how much better the campaign performs relative to the baseline.
Case in point: in the previous example, with the test group's CTR at 8% and the control group's at 3%, the lift is calculated as:
Lift = (CTR in Test Group - CTR in Control Group) / CTR in Control Group
Lift = (8% - 3%) / 3% = 166.67%
A 166.67% lift means the test group's CTR was 166.67% higher than the control group's.
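Note that lift divides by the control group's KPI, whereas incrementality divides by the test group's. A small sketch (the `lift` helper is illustrative) makes the distinction concrete:

```python
def lift(test_kpi, control_kpi):
    """Relative improvement of the test group over the control group:
    ((test - control) / control) * 100."""
    return (test_kpi - control_kpi) / control_kpi * 100

# Same CTRs as before: 8% test, 3% control
print(round(lift(8, 3), 2))  # 166.67
```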
ROI depicts the campaign's profitability by comparing its cost to the revenue it generated. It calculates the net difference and assesses whether the campaign produced a positive return.
Suppose the LinkedIn Ads campaign for your new product launch costs $10,000. It resulted in 50 new customers with an average customer value of $200 each.
The total revenue generated is $10,000 (50 customers * $200/customer).
The ROI would be:
ROI = (Revenue - Cost) / Cost
ROI = ($10,000 - $10,000) / $10,000 = 0
Zero ROI shows that the campaign broke even: it generated revenue equal to its cost, so it was not profitable.
Net incremental revenue is the additional revenue generated by the test group compared to the control group. It accounts for the revenue increase attributable to the campaign.
For instance, if the total revenue generated by the test group was $20,000 and the control group was $10,000, then NIR would be:
NIR = Revenue in Test Group - Revenue in Control Group = $20,000 - $10,000 = $10,000
This $10,000 is the incremental revenue directly attributable to the campaign.
So your next question may be: what's the difference between an incrementality test and attribution? Are they the same or different?
Incrementality Testing focuses on understanding the direct impact of a specific marketing activity by isolating it and measuring the changes in desired outcomes. It involves comparing a test group exposed to the campaign to a control group that is not. This approach answers the fundamental question: Did the marketing campaign cause an increase in sales or conversions? Incrementality testing is particularly valuable in a privacy-first world, as it does not rely on individual user-level data but assesses group-level data to understand the impact of marketing activities.
On the other hand, MTA is designed to understand the contribution of each marketing touchpoint throughout a customer's journey. It considers multiple consumer interactions with a brand across various channels before making a purchase. MTA models, such as linear, time-decay, or position-based, aim to assign proportional credit to these different channels and touchpoints, providing a more comprehensive view of what drives sales and conversions.
Nate Branscome, a GTM and attribution expert, recommends three strategies for CMOs:
While the core focus of incrementality testing is evaluating campaign performance, it can be applied in different scenarios to assess the effectiveness of marketing efforts. So let's look at some of the most common scenarios for applying incrementality testing.
Marketing incrementality evaluates the effectiveness of paid ads by assessing their performance in test and control groups.
For instance, an e-commerce retailer runs two identical campaigns in different regions. In one region, they increase their ad spending, while they keep it the same in the other. By comparing the sales growth between the two regions, they understand if the increased ad spending is driving additional sales.
Businesses can assess whether an additional email marketing campaign is worth the investment: incrementality tests split the audience into test and control groups and compare the impact.
Suppose an e-commerce website sends out promotional emails with discounts to only one segment of its customer base while excluding another segment. Then, by analyzing the sales data from both groups, they can extract insights to determine if the emails helped in sales.
Incrementality testing helps retailers and e-commerce platforms assess if promotions and discounts add actual value for their customers or just act as a nudge to boost sales.
For instance, a store runs a "Buy One Get One Free" promotion for one week and pauses it the next. By comparing sales across the two weeks, it can determine whether the promotion drove any significant increase.
Incrementality testing helps businesses assess the value of loyalty programs by isolating the real drivers of repeat orders.
If your brand has a loyalty program where members earn reward points for every purchase, you can use incrementality testing to compare the spending of members and non-members to see whether the program encourages customers to purchase more often.
Incrementality testing is useful for brands launching new products because it reveals whether sales and awareness come from the new marketing campaigns or from organic interest.
Say a phone company introduces a new model and runs marketing campaigns. They compare the sales of the new model with a similar older model (historical data) to find out if the marketing efforts for the new product launch are driving incremental sales.
Now that we better understand how incrementality helps to overcome the challenges of attribution, here are three core reasons why your next campaign must invest in incrementality testing.
Incrementality testing allows you to prove the direct impact of a campaign on desired KPIs. For instance, if an e-commerce company launches a campaign, it can compare outcomes between exposed and holdout groups and tie the difference back to the campaign's impact on sales.
Brands run multiple advertising campaigns at the same time across various platforms. Incrementality testing helps distinguish between the campaigns that bring desired outcomes and those that don't.
For example, if a retail brand runs Google Ads, TikTok Ads, and Facebook Ads simultaneously, incrementality testing helps identify which platform performs best.
In this economy of "do more with less," marketing teams must explain how the money they spend on ads is being used.
Incrementality testing becomes their advocate and helps prove ROI with specific data. For instance, an e-commerce retailer can use marketing incrementality testing to determine the effect of a paid social media advertising campaign on online sales vs organic search.
Overall, incrementality helps you understand which campaigns give momentum to the core success metrics.
You must consider the following factors to choose channels for testing:
Based on these two factors, some of the channels that are relevant for incrementality testing are:
These channels become a good starting point for incrementality testing.
While incrementality offers insights to optimize marketing spend, the top 3 common challenges that can give misleading results are:
Seasonality refers to fluctuations in consumer behavior and data caused by external factors, such as holidays or weather changes. To address seasonality, compare test results with historical data from the same period in previous years, or simply avoid testing during holidays and other seasonal spikes.
Outliers are data points that deviate sharply from the average. For example, where the average order value is $200, a $4,000 order is an outlier.
Overlapping audiences, on the other hand, are individuals who fall into both test and control groups, which muddies data interpretation. To limit the impact of outliers, use statistical methods like trimming or robust regression. For overlapping audiences, segment and target groups carefully or use exclusion lists.
An inadequate testing duration or small segment size can give inconclusive or unreliable results. Relying on shorter tests and small segments will not provide significant data or long-term insights.
To get reliable findings, ensure the testing duration aligns with the expected customer behavior cycle, business cycle, or industry norms.
Additionally, aim for a significant sample size where the control group is at least 10-20% of the test group.
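One way to sanity-check whether your groups were large enough is a two-proportion z-test on the observed conversion rates. A minimal pure-Python sketch; the function name and the 8,000/2,000 split with illustrative click counts are assumptions, not from the article:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_test, n_test, conv_control, n_control):
    """Two-sided z-test for a difference in conversion rates between
    a test and a control group; returns (z statistic, p-value)."""
    p1, p2 = conv_test / n_test, conv_control / n_control
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 640 clicks from 8,000 test users (8% CTR)
# vs. 60 clicks from 2,000 control users (3% CTR)
z, p = two_proportion_z_test(640, 8000, 60, 2000)
print(p < 0.05)  # True: the difference is statistically significant
```

If the p-value comes back above your significance threshold, the observed lift may be noise, which usually means running the test longer or with larger segments.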
Incrementality testing doesn't rely on third-party cookies or demand individual user-level data. As a result, it is a valuable strategy for navigating privacy-first regulations while still assessing the performance and impact of your marketing campaigns. With data privacy concerns at an all-time high, it stands out as an ethical way to measure user behavior while adhering to regulations.
In A/B testing, the sample audience is split into two or more groups to test different campaign variations. It is used to understand which variation performs better based on predefined metrics like click-through rate.
An incrementality test, by contrast, measures the incremental value of a marketing campaign, for example, the conversions that resulted from the specific media strategy being tested.
When choosing between incrementality testing, marketing mix modeling (MMM), and multi-touch attribution (MTA), the decision depends on your specific marketing goals and the insights you seek.
Each method has its strengths and is suited to different aspects of marketing analysis. Incrementality testing is vital for causal insights into the effectiveness of specific campaigns, MMM is best for strategic planning and budget allocation, and MTA excels in providing detailed insights into the customer journey and touchpoint effectiveness.