How to run creative testing: best practices


You’ve worked hard on your ad/website/packaging (delete as applicable). 

While good creative testing is how you guarantee its success, bad creative testing can guarantee failure. Inaccurate insights can send you in the wrong direction—and no one likes having to scrap a project and start from scratch.

We’re here to make sure your creative testing research is thorough, useful and efficient. Here’s our essential guide to running good creative testing research. 

The two methods of creative testing: sequential and monadic

There are two main methods for creative testing. 

There’s monadic testing, where you divide your respondents into as many groups as there are creatives to test, then ask each group to review one creative idea without seeing any of the others. This helps stop people’s answers being biased by their views on the other ideas in your survey. 

Then there’s sequential monadic testing, where you test multiple creatives against each other, one after the other, in a single survey. You can ask the audience to compare and contrast the creatives and say which one they prefer. 
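
To make the difference concrete, here’s a minimal Python sketch of how respondents might be allocated under each design (the creative names, respondent count and group sizes are made up for illustration):

```python
import random

creatives = ["Concept A", "Concept B", "Concept C"]  # hypothetical creatives to test
respondents = [f"respondent_{i:03d}" for i in range(1, 301)]  # hypothetical panel of 300

# Monadic design: shuffle the panel, then split it into one group per creative.
# Each group reviews exactly one concept and never sees the others.
random.shuffle(respondents)
group_size = len(respondents) // len(creatives)
monadic_groups = {
    creative: respondents[i * group_size:(i + 1) * group_size]
    for i, creative in enumerate(creatives)
}

# Sequential monadic design: every respondent reviews all the creatives,
# one after the other, within the same survey.
sequential_groups = {person: list(creatives) for person in respondents}

print({c: len(group) for c, group in monadic_groups.items()})  # 100 respondents per concept
```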

What’s monadic testing? And how can you use it?

Check our deep dive into monadic testing to make sure you have the creative and concept testing insights you need to make the right decisions for your brand.


What are the benefits of creative testing?

Whichever way you choose to do creative testing, you’ll find there are some big benefits to it. You’ll improve the effectiveness of your ads, streamline creative processes, and more.

Creative testing (and acting on the results) ensures that every element of your advertising, from visuals to messaging, resonates deeply with your target audience. It significantly reduces the risk of costly missteps and enhances the potential for campaign success. 

You can’t go on guesswork. There are a lot of numbers flying around about how many ads we see a day, some running into the thousands. While those figures have been debunked (the real number might be closer to ‘just’ 100 a day), it’s absolutely true that we see ads all the time, everywhere, and it’s becoming harder for brands to come up with something people won’t skip or walk past. One study found that 65.9 percent of app users say they always skip in-app video ads – and the numbers across other platforms are similar.

This makes it all the more important to put out creatives that stand out and resonate with your audience. 

Let’s take a closer look at the benefits of both monadic and sequential testing. 

The benefits of monadic testing include:

  • Focused feedback: If respondents only have to evaluate one creative concept (instead of several), you’ll probably get more thoughtful responses. You learn more about the impact of each individual creative, which gives you more actionable feedback. 
  • Reduced bias: If participants look at a single creative, they won’t be influenced by the other creative work, so their responses are more likely to be unbiased and genuine. 
  • Dive deeper into the details: Monadic testing allows for a deeper dive into each concept and its specific elements. If you’re working on complex or nuanced creatives, this can be incredibly valuable.
  • Measure performance more accurately: If the stakes are high, the margin of error should be low. With this type of creative testing, you can often more accurately gauge the performance of individual creatives. 

Leaning toward sequential testing? Here are the benefits:

  • Direct comparisons: Letting respondents evaluate multiple creatives in succession makes it easier to understand which one stands out the most. 
  • Broader insights: The results you get from sequential testing may better reflect real-world scenarios, because respondents see multiple creatives, just as people see loads of ads and creative assets every day in the real world.
  • More cost and time-efficient: Testing multiple creatives in one go is simply quicker and often more cost-efficient. Perfect for those tight deadlines.
  • Flexibility in your research design: Testing a range of creatives against each other can be particularly useful in the early stages, when you’re still not sold on which creative direction to take.

How to run creative testing research

What types of questions should you ask in a creative testing survey?

Ideally, you should ask a mixture of quantitative and qualitative questions. But if you ask too many qualitative questions, your respondents may get tired of answering and give you low-quality data, and you’ll have a lot of open-ended answers to read through, which takes much longer to analyze.

Qualitative questions are generally best used in the early, exploratory stages of creative research, when you’re getting a vibe for what type of creative you should be publishing. Quantitative questions, on the other hand, will give you a larger volume of data for finding out which creative ideas work; because the numbers are bigger, you can have more confidence in their authority. 

Here are some example questions you could ask to get you started. We’ve also got more creative testing example questions to inspire your research. If you’re pressed for time, we have a creative testing survey template with all the best practices built-in.

Sequential testing survey questions

Quantitative

  • Having seen both designs, which one do you prefer?
  • Which ad makes you want to buy the product more?
  • Which ad gives you a better feeling about the brand?

Qualitative

  • Why did you choose that one? Please provide as many details as possible; we really appreciate any feedback.
  • Why don’t you like X ad?
  • How could X ad be better?

Monadic testing survey questions

Quantitative

  • Does this ad make you want to buy the product?
  • Does this ad make you feel good about the brand?
  • What is your favorite thing about the ad?
  • In your opinion, how clear was it that this advert was for [brand]?

Qualitative

  • How does the ad make you feel?
  • What don’t you like about the ad?
  • How could the ad be improved?
  • Please can you explain why you have given this answer?
  • What do you think was the main message of this advert?

How to reduce bias in your creative testing: 4 best practices

You want your respondents to give you insights that are completely unbiased. Biased responses = decisions based on unreliable insights. 

To avoid bias, you need to be clear about the things you want respondents to focus on and remove other distracting elements. 

Here’s how. 

1. Remove logos and familiar branding

If you were testing different types of packaging, you could remove brand logos to make sure any preconceptions the respondent has about the brand don’t influence their responses.

It’s also worth removing any colors that might remind people of your brand. Who doesn’t recognize the Coca-Cola red or McDonald’s yellow? (You only need to worry about this if your brand is synonymous with its colors; if your brand isn’t that huge, it’s unlikely to be an issue.)

2. Even remove the product itself

You might also want to remove the product name, and perhaps even the product itself, to get people’s view on the packaging alone.

Imagine you’re testing packaging for a snack or soft drink: respondents could be swayed if they knew it was their favorite (or least favorite) flavor. 

3. Test creative you know doesn’t work

This might sound counterintuitive, but you’ll thank us later!

By testing old or ineffective assets, you give yourself a benchmark to compare your new assets against. If your new asset scores similarly to the old one, it’s not something you can justifiably launch. But if your new research shows improvements in key areas, that’s a green light!
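
As a rough illustration of the benchmark idea, here’s a small Python sketch comparing a new asset’s scores with an old, known-weak one (the column names, scores and 1–5 scale are assumptions for the example, not a fixed format):

```python
import pandas as pd

# Hypothetical purchase-intent scores (1-5) for an old, ineffective ad and a new one.
results = pd.DataFrame({
    "asset": ["old_ad"] * 5 + ["new_ad"] * 5,
    "purchase_intent": [2, 3, 2, 2, 3, 4, 4, 3, 5, 4],
})

benchmark = results.loc[results["asset"] == "old_ad", "purchase_intent"].mean()
new_score = results.loc[results["asset"] == "new_ad", "purchase_intent"].mean()

if new_score > benchmark:
    print(f"New asset beats the benchmark: {new_score:.1f} vs {benchmark:.1f}")
else:
    print(f"No clear improvement over the old asset: {new_score:.1f} vs {benchmark:.1f}")
```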

Pro tip 💡

“Don’t tell anyone, but it’s totally fine to test your competitors’ assets. In fact, we advise brands to make it standard practice.

By doing this, you have another benchmark to compare your assets with. But you also get an understanding of your competitors’ creative successes or failures.” 
Sam Killip
VP Customer Success

4. Test an exaggerated version of your creative

If there are certain elements of your assets you want opinions on, exaggerate or highlight them.

For example, if you’re testing packaging and want to know what people think about your imagery choices, you could make the imagery more prominent on the version you test. Or you could remove other distracting elements so that people focus more on the elements you want them to see. 

Best practices for setting your creative testing audience

Here are some pointers on how to reach the right people with your creative testing.

Work out who’s in your target audience

You should make sure your creative testing audience represents your target audience. 

Pro tip 💡

“Do you have doubts about who you should be running creative testing with? You’ll need to run some consumer profiling to get a proper sense of who’s in your target audience.” 
Elliot Barnard
Customer Research Lead

Widen your audience, and narrow it down later

You should slightly expand the parameters of your target audience for creative testing. This will allow you to compare the reaction of your non-target audience with that of your target audience. If the reactions are broadly similar, then you might find it worthwhile to expand your main target audience in the future.

For example, if your target audience is 30- to 40-year-olds, then conduct your test on 25- to 45-year-olds. 
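
If you capture age in your survey, comparing the core and adjacent bands afterwards is straightforward. Here’s a hedged Python sketch with invented column names and scores:

```python
import pandas as pd

# Hypothetical responses: respondent age plus a 1-5 preference score for the creative.
responses = pd.DataFrame({
    "age":        [27, 29, 33, 35, 38, 41, 44, 31],
    "preference": [4,  3,  5,  5,  4,  3,  4,  4],
})

core = responses[responses["age"].between(30, 40)]        # main target audience
adjacent = responses[~responses["age"].between(30, 40)]   # widened 25-29 / 41-45 bands

print("Core 30-40 average preference:", round(core["preference"].mean(), 2))
print("Adjacent bands average preference:", round(adjacent["preference"].mean(), 2))
```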

Set audience quotas

Put audience quotas in place, especially if you’re conducting monadic testing. With audience quotas, you pre-set the number of respondents from specific groups to make sure you get a good spread of responses. 

If you don’t set audience quotas and only use nationally representative samples, you could end up with different audiences reviewing different creatives, which can skew your results. Putting quotas in place means you can be sure that comparable audiences of target customers are reviewing each creative, rather than a bunch of irrelevant people.
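
Your survey platform will usually manage quotas for you, but as a rough sketch of the logic, this is all a quota check really does (the groups and targets below are invented):

```python
# Hypothetical quota targets: how many completed responses you want from each age group.
quota_targets = {"18-29": 50, "30-44": 50, "45-60": 50}
completed_so_far = {"18-29": 50, "30-44": 37, "45-60": 12}

for group, target in quota_targets.items():
    remaining = target - completed_so_far.get(group, 0)
    status = "quota full" if remaining <= 0 else f"{remaining} more responses needed"
    print(f"{group}: {status}")
```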

Randomize!

Make sure you randomize your creative assets (if you’re testing more than one at a time). If you don’t, then order bias might mean you end up with skewed results. 

If you randomize, then you’ll get reliable insight into how effective each of your assets is.
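
In practice, randomizing just means each respondent gets an independently shuffled display order, roughly like this Python sketch (the creative names are placeholders):

```python
import random

creatives = ["Concept A", "Concept B", "Concept C"]

def display_order(respondent_id: str) -> list[str]:
    """Return an independently shuffled order for one respondent.

    Seeding on the respondent ID keeps each person's order stable if the
    survey page reloads, while still varying the order across respondents.
    """
    rng = random.Random(respondent_id)
    order = list(creatives)
    rng.shuffle(order)
    return order

print(display_order("respondent_001"))
print(display_order("respondent_002"))
```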

Test with your own audience too

While you would typically carry out creative testing on people who fit your target audience—AKA prospective customers—it can also be a good idea to carry out creative testing on your loyal customers as well. 

You can safely assume that your existing customers know your brand better than most people. That familiarity can mean they give you additional insights you hadn’t thought of before. For example, they could tell you whether the creative you show them matches the brand purpose and image they already associate with you.

Know when to move on

Although it’s useful to keep testing and iterating your creative ideas throughout the process of their creation (and even after they’ve been published, to a certain extent), you need to know when to stop! 

If your new creative ideas are outperforming the creatives you tested previously, there’s no need to keep tweaking and testing. If the reactions are positive overall, you’re probably ready to launch your creative ideas to the world for real.

And if you focus too much on tweaking existing ideas, you might not leave yourself the space to test the game-changing, off-the-wall ideas that could take your brand to the next level.

Before you launch your creative testing project, make sure you know the key mistakes you should avoid. 

What do you do after running creative testing?

You’re not done yet (sorry!).

In fact, this is where the real effectiveness of your creative testing study is determined. You need to be willing and ready to act on the results that come out of your creative testing. Let’s look at how you can navigate this part of the process. 

  1. Start by deciphering the data: With Attest, you’ll immediately get an overview of the data and analytics, but it’s up to you to interpret it and look at every piece of feedback. Are there overarching themes? Those deserve most of your attention, but keep an eye on the details as well.
  2. Identify standouts and underperformers: If you compared different concepts in your testing process, look at which ones were voted best and worst, and dive into the why (there’s a quick sketch of this after the list). These key insights are indicators of what works and what needs rethinking. This is where you need to trust the data rather than your own emotions: listen to what your target audience is telling you. 
  3. Contextualize with quantitative and qualitative feedback: If you did it right, you will not only have asked what works and what doesn’t, but also why. This data will help you determine what elements to change. 
  4. Share insights with your team: Start doing this early on. They might come up with their own interpretations or ideas based on the data. Don’t be afraid to present data to creatives!
  5. Refine and iterate: Rework your concepts based on the feedback. Keep checking in if your alterations are actually in line with the feedback you got. 
  6. Document and learn from the process! This step is often overlooked. Don’t just look at the survey results and the new concepts – also keep a record of how the whole process went. That record can be of great value next time you work on creative concepts. 
  7. Launch your refined creatives into the world 🚦 They’re ready. Just keep an eye on how they’re performing!
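
For step 2, a simple roll-up of scores per concept is often enough to surface the standouts and underperformers. Here’s a minimal Python sketch assuming a made-up export with one row per respondent per concept (your own data will have different column names):

```python
import pandas as pd

# Hypothetical survey export: one row per respondent per concept.
results = pd.DataFrame({
    "concept":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "appeal":    [4, 5, 4, 2, 3, 2, 4, 3, 4],   # 1-5 rating
    "would_buy": [1, 1, 1, 0, 0, 1, 1, 0, 1],   # 1 = yes, 0 = no
})

summary = (
    results.groupby("concept")
    .agg(avg_appeal=("appeal", "mean"), purchase_intent=("would_buy", "mean"))
    .sort_values("avg_appeal", ascending=False)
)
print(summary)  # top row = standout concept, bottom row = underperformer
```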

The Consumer Research Academy is brought to you by the Customer Research Team—our in-house research experts. Any research questions? Email or chat with the team.
