Message Testing Methods for Narrative Change
Introduction
While the narrative change approach holds a lot of promise, building a campaign with an attitude-change goal that actually works for a particular context and audience presents significant challenges and risks. This is why empirically tested narrative strategies are at the heart of ICPA's work to support & deliver change in often polarised debates like migration & civic space.
This short resource summarises the testing methods we have experimented with over a seven-year period and that have proven effective. It is not an exhaustive taxonomy of the full range of methods available. Rather, it provides first-level practical answers to the questions & concerns we regularly hear from NGOs engaged in narrative change campaigns:
- Is testing really feasible/affordable for non-profit orgs?
- How do you do such testing & which methods work best?
- Is testing really worth the effort?
The practical orientation of this guide helps campaigning organisations make initial decisions on a suitable testing approach that fits their needs, capacity and budget. It is worth noting that while the focus of this guide is on individual methods, there is an emerging consensus on the need for more long-term/longitudinal monitoring and testing of key public attitudes.
In simple terms, message testing is a process of getting feedback from your intended target audience on a campaign strategy, copy or content to see whether it gets the responses you expect. This valuable feedback allows campaigners to adapt content before launching and during campaigning work. In project design terms, message testing provides essential data mainly for the formative evaluation stage of campaigning and community engagement work, and can significantly inform longer-term evaluation & learning.
Why test?
When the goal of a campaign is changing the attitude of a more sceptical target audience, there is simply more risk that you get it wrong or, worse still, actually make the situation worse. So there is a need to invest in understanding what is working (and what is not) through testing. You can then adjust as you go to build a proven campaign approach that has the best chance of achieving your goals.
The first goal is to figure out what is NOT working. When we target audiences we are not so familiar with, we sometimes build messages and stories based on assumptions that turn out not to resonate with the audience. Further, campaigners should not risk their content/messages triggering the opposite responses to those intended and actually hardening attitudes (the so-called 'backfire effect'). Testing works especially well in identifying the elements that are not working or even backfiring, so you can make informed decisions on what to remove or adjust based on this feedback. On the more positive side, taking an iterative, test-and-learn approach will help you adjust and finalise the messages and material that are working, and scale up from there to maximise impact. This approach will also build a rigorous evidence foundation to prove your campaign strategy before roll-out, and such evidence can be a strong basis to mobilise support for your initiative and for the narrative change approach over the longer term.
What & when to test?
A wide range of material is commonly tested including pitches, top line narratives/messages, stories & protagonists, slogans/hashtags, visuals/memes, video material and website content. We have found it useful to plan testing at two stages in campaign development:
- Concept level – Once you have identified a target audience/segment of the public and developed your messaging/pitching approach, and perhaps some draft content, it is useful to see if your strategy is working before committing to expensive production costs.
- Content level – This is the more traditional understanding of when to test, i.e. once images, video and campaign content have been developed and you are trying to identify which content and messaging work better.
But if you can't be so strategic for whatever reason, it's important to remember that "any testing is better than no testing" [1].
Overview of 7 common message testing methods
In each of the tables below, we provide:
- an overview of each testing method
- the data you get from each test
- the best time to use each particular method
- the capacity needed to effectively employ the method
- high- and low-cost options for each method
The tables are an attempt to answer the first-level practical questions that NGO partners commonly ask us. We have also provided some basic cost estimates for each, so you have a ballpark idea of the range of costs in Europe in 2022/2023. What is not provided is detailed step-by-step advice on implementation.
Testing Method 1 - Focus groups
to get rich emotive group responses
- Description: Small groups from the target audience/segment are brought together (online or in person) to respond to your messages and content in a facilitated 1-2 hour discussion. Running a minimum of 2 focus groups with each audience segment ensures the validity of results.
- Feedback data: In-depth qualitative group responses to your messages and content from representatives of your target segment/audience. A big advantage is seeing what emotional response is triggered by your content and the conversations that follow. It's important to remember that the feedback you are getting is a group response based on the participants' interaction, rather than the individual opinions of the 6-12 participants.
- How/when to use: More suitable for earlier-stage, concept-level testing, as the in-depth responses help you understand the audience better and inform how to adapt/respond in the content-building phase. Can also be used for testing mock-ups/draft content.
- Effort/Capacity: High effort and some research and analytical skills – working closely with a research agency or facilitator to select participants, agreeing how the discussion will be moderated, and interpreting results/drawing conclusions from the discussions. It is a lot more work if you also have to source participants.
- Higher cost option (€5,000 for 2 groups of 8 people, 2021): Commission a professional research agency (e.g. IPSOS) to source the target participants, set up and facilitate the focus groups. These can be done face to face or online, with the opportunity for campaigners to observe the discussion. It is worth spending the money because professional research agencies do these things well.
- Lower cost option: Organisations that work closely with groups of the target audience can quite easily pull together small groups to run more informal focus groups, and conduct these a number of times. A large faith-based group we support works with many community groups and ran focus groups regularly and successfully throughout a campaign. The downside is that you cannot really control the representativeness of the groups, but running many groups can help to balance this.
Testing Method 2 - Opinion polling on messages
to robustly test messages & triangulate results
- Description: Getting attitude responses to your narratives and campaign messages from a representative sample of the public. The typical sample size used for national polls in many countries is 1,000, and this is also a standard for our message testing (the short sketch after this table illustrates why sample size matters).
- Feedback data: Mostly quantitative data on levels of agreement with your messaging statements, ranging from strongly disagree to strongly agree. You can also include demographic targeting and attitude statements to segment results and see responses from specific groups, for example supporters or the movable middle.
- How/when to use: This method can be used for both concept- and content-level message testing, but is more suitable for earlier-stage tests of main campaign messages. We often use this method at the end of the strategy/concept-building stage to see if the campaign messages that worked in focus groups are also working at scale for the target segments, i.e. for quantitative verification of qualitative results from focus groups.
- Effort/Capacity: Middle-level effort and an understanding of the basics of survey design – higher cost options will provide more support and analysis, but you still need to commission what you want. Lower cost options mean you may need to design the survey and do your own data analysis.
- Higher cost option (approx. €3,500 [2]): Commission a professional research agency or pollster to develop and run a bespoke survey, source a suitable sample of respondents and analyse the results. The higher cost option will get higher quality responses from a more targeted public segment.
- Lower cost option (approx. €900-€2,500 [3]): The cheapest option is to buy questions on an existing 'omnibus' survey (e.g. YouGov Omnibus Daily). You can also build surveys that get quick results by working with existing panels of representative population groups (e.g. IPSOS Fast Facts). Lower cost options provide lower quality data from less targeted samples and less flexibility to segment the results. They also require more in-house capacity for survey building and data analysis. But if you are looking for confirmation that you are going in the right direction, they work well.
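A back-of-the-envelope way to see why a sample of around 1,000 is the common standard, and why segmented results get noisier, is the survey margin of error. The short Python sketch below is a general illustration of the statistics rather than a step from any provider's toolkit; it assumes simple random sampling, which real polls only approximate.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured on n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a standard national sample of 1,000 respondents:
print(f"n=1000: +/- {margin_of_error(0.5, 1000) * 100:.1f} percentage points")  # ~3.1

# A single audience segment of 250 respondents within that sample is much noisier:
print(f"n=250:  +/- {margin_of_error(0.5, 250) * 100:.1f} percentage points")   # ~6.2
```

In practice this means headline results from a 1,000-person poll are precise to a few points, while results for small segments should be read as indicative only.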
Testing Method 3 - Randomised Controlled Trials (RCT)
to measure attitude change with confidence
- Description: Measure how much your campaign material shifts key attitudes with one group (the test group) in comparison to a similar group who do not see your material (the control group). The sample size for RCTs has varied between 1,000 and 4,000, depending on how much access your provider has in different country contexts and your budget limitations.
- Feedback data: Robust, comparative quantitative attitude-shift data that will allow you to say, for example, that the material moved public attitudes by 3 points in the positive direction; RCT data is considered the gold standard for such tests. You also get in-depth learning about audience responses to your material, plus an up-to-date survey of key attitudes from your target audience in the control group. (A sketch of the basic comparison follows this table.)
- How/when to use: More suitable for content tests of campaign material, e.g. videos and memetic material. Many partners use it to pick winners, i.e. to select the final content for their campaign. Also provides very useful data for campaign evaluation.
- Effort/Capacity: Middle-level effort and an understanding of the basics of survey design. For the higher cost option, you will need to work with a research agency to build the survey for the test. The lower cost option needs more effort and capacity – you will have to design the test and do the data analysis yourself.
- Higher cost option (approx. €4,000 per video or meme tested, 2022): Commission a professional research agency (like Swayable) to design and build the survey, run the test and do the data analysis.
- Lower cost option: There are a few cheaper options for RCT tests; one partner runs their own focus groups with 2 groups from the target segments for the campaign – one sees the campaign material and one does not. This approach requires significant effort and research capacity and does not get the wide population view often needed for such a test. But it can work in a data-limited way.
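To make the "moved public attitudes by 3 points" idea concrete: the basic RCT analysis compares the average attitude score of the group that saw your material with that of the control group, and checks whether the difference is bigger than chance. The sketch below uses invented data and a simple numeric attitude scale; it illustrates the idea only and is not the analysis any particular agency runs.

```python
import numpy as np
from scipy import stats

# Invented attitude scores (0-100 scale) collected after the test:
# 'test' saw the campaign video, 'control' did not.
rng = np.random.default_rng(42)
control = rng.normal(loc=50, scale=15, size=1000)
test = rng.normal(loc=53, scale=15, size=1000)

shift = test.mean() - control.mean()              # the headline "points moved"
t_stat, p_value = stats.ttest_ind(test, control)  # is the difference larger than chance?

print(f"Estimated attitude shift: {shift:+.1f} points (p = {p_value:.3f})")
```

A commissioned provider will typically report something equivalent to this shift, broken down by audience segment, alongside the significance of the result.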
Testing Method 4 - A/B testing on Social media
to pick options that work better with your audience
- Description: Make final decisions by identifying the better-performing message or content options for your objectives, such as reach or engagement. For example, campaigners often test 2 variations of a social media post with the same visual/meme to pick the winner. It can also help you sequence your content, identifying which content works better for reach or awareness building and which drives more engagement or traffic to your website.
- Feedback data: Robust quantitative data on key social media performance metrics, e.g. an engagement metric like cost per click, plus in-depth demographic data on the audience reached. It can also be useful to read the comments and qualitative responses to understand what the material is triggering with your target audience. (A sketch of how to compare two variants' results follows this table.)
- How/when to use: More suitable for later-stage content tests of campaign material, e.g. posts, videos and memetic material. To run such tests on Facebook/Instagram, NGOs need to be registered as a social cause. As privacy rules tighten and algorithms change, you need to be up to speed on what is possible.
- Effort/Capacity: Low-level effort and an understanding of how to build audience profiles and run ads on platforms like Facebook and Instagram. Building audience profiles that work takes some time and is worth getting advice on.
- Higher cost option: You can basically spend as much as you want on ads and tests in the campaign process, but a single basic test is relatively cheap, e.g. around €50 to get some meaningful data.
- Lower cost option (€400 for 8-10 tests on Facebook/Instagram, 2019): For any campaign, it is worth having a budget to run split or A/B tests before the roll-out to make final content decisions. You can also test further in the opening part of the campaign by seeing what works through paid ads, in what is called a 'test and learn' approach.
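Ad platforms report the raw numbers, but small differences between two variants can easily be noise. If you want a quick sanity check on whether variant B really outperformed variant A, a two-proportion test on the click counts is one common approach; the sketch below uses invented numbers and the statsmodels library, and is a rough illustration rather than a required step.

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented split-test results: two versions of the same post,
# each shown to a comparable audience, counting link clicks.
clicks = [148, 192]               # variant A, variant B
impressions = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
ctr_a, ctr_b = clicks[0] / impressions[0], clicks[1] / impressions[1]
print(f"CTR A: {ctr_a:.2%}  CTR B: {ctr_b:.2%}  p = {p_value:.3f}")
```

A small p-value suggests the difference in click-through rate is unlikely to be chance; with only a few dozen clicks per variant, treat the "winner" with caution.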
Testing Method 5 - A/B testing in email lists
to pick options that work better for your audience & supporters
- Description: Get feedback on messaging and content options from those who follow your organisation or campaign efforts by signing up to your email list. Also useful for testing wide-reaching email communications or messaging.
- Feedback data: Quantitative and qualitative data from your followers. There is a risk for some organisations that you are testing only with supporter groups, so the method is more suitable for those with a wide reach into different segments of the public. Very good for ruling out what won't work, when you get negative or (just as bad) no responses. You need to have data to segment your audience, and you could get skewed results if feedback comes only from emotional responders.
- How/when to use: Can be used successfully throughout the campaign development process, from concept to content testing. Be careful not to over-burden the list with too many tests!
- Effort/Capacity: Low-level effort and an understanding of running split tests and using survey tools like SurveyMonkey or Google Forms (a minimal sketch of a random split follows this table). Organisations with big, wide-reaching email lists normally have a communications team that regularly communicates with those on the list, so adding the testing dimension is not normally a big burden.
- Higher cost option: You can buy access to targeted email lists, but partners do not report good results from this approach and there can be privacy issues around how people end up on such lists.
- Lower cost option: Working with an existing email list to do testing requires mostly manpower and access to a survey tool like SurveyMonkey.
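Many email tools offer split testing out of the box; if you are dividing a list yourself, the important thing is that the two groups are assigned at random rather than, say, by sign-up date or alphabetically. A minimal sketch of such a split, with hypothetical addresses:

```python
import random

def split_for_ab_test(emails, seed=7):
    """Randomly split an email list into two roughly equal groups (A and B)."""
    shuffled = list(emails)
    random.Random(seed).shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical subscriber list; send message variant A to one group,
# variant B to the other, then compare open and click rates.
group_a, group_b = split_for_ab_test(f"subscriber{i}@example.org" for i in range(1000))
```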
Testing Method 6 - Online Bulletin Boards
to get audience responses over a longer time
- Description: Get feedback on messaging and content options in a dedicated online forum from an online panel of your target audience over a few days to one week (or even more). As well as online bulletin boards, this method is also referred to as asynchronous focus groups. It is often used to watch consumer responses to brands over longer time periods, and it allows people to participate when it is convenient for them.
- Feedback data: In-depth quantitative and qualitative data on responses to your materials and messaging. It is particularly good for understanding what is triggered by your materials, as you see conversations continue over a longer period, and you can get text, audio and even video feedback from participants. It also allows you to change your questioning approach as the conversation evolves and even lets participants ask their own questions. A particularly good method for engaging harder-to-reach audiences who may not want to participate in more public fora.
- How/when to use: For a campaign approach, it is often used early on to see responses to key messaging and draft content options. It can also be combined with a more longitudinal approach to observe how attitudes change over longer time periods.
- Effort/Capacity: Quite a high level of effort and understanding of the research approach, both to commission what you want from a research provider and to prepare suitable materials. To get the best out of it, someone from your team needs to be available to make decisions on how to react as the test is happening.
- Higher cost option (€3,500-€5,000 for 1 session over 3 days, 2022 [4]): You are paying for the use of a secure, dedicated online space to run the test, including mobile access. You also normally pay for help to set up the test, recruit participants and moderate the test, and perhaps for some analysis of the results. The safety, privacy and security of those participating is very important in this approach, so lower cost options are rare/don't exist.
- Lower cost option: Working with an existing email list or online group to do such testing requires mostly manpower and access to a survey tool like SurveyMonkey. You would have to have an existing group that includes a significant proportion of members of your target audience.
Testing Method 7 - Test & learn through paid ads
to find what works at scale & adapt
- Description: Try out all messaging, content and copy options in the opening stage of a campaign roll-out and see what works, i.e. testing and learning. Then focus the rest of the campaign on the messages, copy and content that have performed better. Narrative change campaigners are often trying to reach audiences that are not their followers, so paid ads are a must and also provide extensive data to evaluate performance.
- Feedback data: Layered and in-depth quantitative data that shows whether you are hitting your campaign targets, plus qualitative data that allows you to understand the tone of responses and conversations triggered by each piece of content. When working with feedback data from multiple platforms and multiple ads, it can be useful to have a single interface or dashboard that brings your data together. There are multiple options for this – we have used Keyhole.
- How/when to use: This test-and-learn approach is usually done in the opening stage (e.g. the first week) of a campaign. It's a good idea to plan a pause week after this test stage to process the data and make choices on how to proceed before continuing the ad buy based on the lessons learned.
- Effort/Capacity: This takes a significant investment of time and effort from the campaign team in the opening phase to set up and run the ads, and to analyse and discuss the feedback to make informed decisions on what content to use and what to drop or adapt. In Facebook Ads Manager, you can set up ads to target reach, engagement or traffic, put all your content/message options for each into one ad (or "campaign", as Facebook calls it), and this set-up will tell you what is working best for each.
- Higher cost option (e.g. €1,000 out of a €2,500 ad budget in a recent campaign): According to seasoned social media campaigners, it is common to spend 40-50% of a campaign's ad budget in this test-and-learn phase.
- Lower cost option (e.g. €500 for 10 ads): You can pick the main types of material and messages and do a cheaper test to see what is working.
If you are interested in how we apply these methods in a campaign evaluation approach, take some time to look at the following:
- A campaign evaluation and testing case study from Germany
- Our overview of how to build an empirically tested narrative change strategy and campaign.
If you are interested in further advice or support on evaluation and testing, drop the Research and Evaluation Hub team an email.
Acknowledgements
We would like to thank the following individuals for their advice through various stages of the development of this resource: Craig Dwyer (ForaChange), Matt MacWilliams (CommsHub), Charlotte Beckett (Why People Do), Brandon Oelofse, Andrew Davies (WeMove Europe) and Dr. James Dennison (Migration Policy Centre & ICPA Associate).
The resource was developed in the framework of ICPA’s RESET project as part of a set of resources to support the strategic communications work of a network of German activists. Support for researching and developing the case study was provided by:
This publication does not express the opinions of BMFSFJ or BAFzA. The authors are solely responsible for the content of the publication.
- [1] https://publicinterest.org.uk/project/framing-equality/
- [2] Price for 20 data points from 1,000 respondents in Germany in 2022, from research agencies like IPSOS & Kantar
- [3] Price for 10 to 20 data points from 1,000-2,000 respondents in Germany in 2022, from research platforms like YouGov Omnibus Daily & IPSOS Fast Facts
- [4] Prices sourced from the experience of campaigners and providers like Fieldworkhub & insideHeads