Professor Patrick Barwise explains how to avoid response bias and nonresponse bias in survey research so brands gain more valuable insights.
This may sound obvious, but survey research only has value if the results are valid. However, many brands are basing decisions on unreliable surveys affected by response bias.
Response bias occurs when survey participants provide inaccurate or unrepresentative answers, leading to skewed data that can seriously undermine your research insights. To help you collect more reliable data, we spoke to Professor Patrick Barwise, one of Attest’s investors and advisers, emeritus professor of management and marketing at London Business School, and a Patron of the Market Research Society.
Response bias is caused by asking non-neutral questions or providing non-neutral response options, so that the answers are influenced in one way or another. Professor Barwise experienced an example of this recently, when asked by Lloyds Bank if he was “delighted” with the service he’d received – clearly leading him toward a positive response.
“We all know that responses to questions can be sensitive to the wording,” explains Professor Barwise. “Some of the recent work on behavioural economics is actually based on that – asking questions that are technically the same but where different ways of framing the question elicit significantly different responses.”
Response bias differs from sampling bias (having an unrepresentative sample) or measurement bias (systematic errors in data collection). Instead, it occurs when the survey design itself influences how people respond, distorting results and leading to poor decision-making based on inaccurate insights.
Understanding the different forms response bias can take is crucial for designing better surveys. Here are the main types to watch out for:
Self-selection bias occurs when only particularly motivated individuals choose to participate in your survey. People with strong opinions – either very positive or very negative – are more likely to respond than those with moderate views.
“In a lot of contexts, especially online, where you’ve often got a response rate of only, say, 2% or so, it could well be that the people who are responding are more interested in the product or the issue than the 98% who are not responding, which could seriously bias the results,” warns Professor Barwise.
Social desirability bias happens when respondents answer in ways that make them look good rather than providing truthful responses. Professor Barwise notes this is particularly problematic for certain topics. When asked if any type of survey is especially vulnerable to response bias, he explains:
“Yes there are. For example, surveys can be an unreliable source of insights into many aspects of major purchases such as car buying. Part of the problem is that the purchase is spread over weeks or even months. But also, you have to be careful about anything in which the response could make the respondent look better or worse to himself or herself and to other people.
“For instance, if someone is paying a premium for a luxury car which is functionally pretty similar to a mass market car and the real reason why they’re buying it is to show other people and themselves that they’re rich and successful, they’re not going to say, ‘I chose the BMW to look rich and successful.’ What they will say is, ‘I chose the BMW because I love the styling’ or ‘I identify with the brand because it’s a driver’s car.’ Anything in which the responses are not neutral in terms of people’s self-presentation is problematic.
“If you’ve got different wording in which one version will tend to bias the response one way and another will bias it another way, you can test how sensitive it is to the wording. But sometimes there’s going to be a bias however you word it because any positive response makes respondents look good – maybe by suggesting that they care about the environment or whatever. If that is the case, you need more qual research and piloting to get to the bottom of it. There’s no magic bullet.”
Acquiescence bias, also known as “yes-saying,” occurs when respondents tend to agree with statements regardless of their content. This is especially common in surveys with many agree/disagree questions.
Extreme response bias shows up when respondents consistently choose the highest or lowest values on rating scales, regardless of the question content. This can skew your average scores and make it difficult to distinguish between truly different opinions.
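Both of these patterns are detectable in the raw data. Here is a minimal sketch in Python, using made-up ratings and hypothetical column names (q1 to q5 on a 1–5 scale), of how you might flag respondents for review:

```python
# Minimal sketch: flag possible "yes-saying" (straight-lining) and
# extreme responding. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "q1": [5, 3, 5, 1],
    "q2": [5, 2, 4, 1],
    "q3": [5, 4, 5, 1],
    "q4": [5, 3, 5, 2],
    "q5": [5, 2, 5, 1],
})
ratings = df[["q1", "q2", "q3", "q4", "q5"]]

# Share of answers at either end of the 1-5 scale, per respondent.
extreme_share = ratings.isin([1, 5]).mean(axis=1)

# Identical answers to every item suggest straight-lining.
straight_lined = ratings.nunique(axis=1) == 1

# Flag for manual review rather than automatic exclusion.
df["flag_review"] = (extreme_share > 0.8) | straight_lined
print(df[["respondent_id", "flag_review"]])
```

The 0.8 threshold here is arbitrary; in practice you would calibrate it against your scale length and question mix.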
While technically a separate issue, non-response bias occurs when the people who don’t respond to your survey differ systematically, in ways that matter for your research, from those who do.
“Market researchers are trained to worry a lot about sample size but if there is some kind of systematic difference between the people who respond and the people who don’t respond, the sample size is usually the least of your problems.
“Nonresponse bias is something you can’t directly observe because the only data you’ve got is from people who have responded and not from those who haven’t, so it’s a bit of a nasty.
“One thing you can do is find another way of reaching a sample of the people who didn’t respond and maybe use more intensive methods to survey just a few of them, but with a higher response rate. Then see if their responses were very different from the ones from your main sample. Another thing you can do is see if the late responses are very different from the early responses which suggests there might be something else going on.”
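One way to put the early-versus-late comparison into practice is a simple contingency test: if late responders (who needed chasing) answer differently from early ones, non-respondents may differ too. This is a minimal sketch with illustrative counts; the field names “wave” and “answer” are assumptions, not part of any real dataset:

```python
# Minimal sketch: compare early vs. late responders as a rough proxy
# for non-response bias. Field names and counts are illustrative.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.DataFrame({
    "wave":   ["early"] * 6 + ["late"] * 6,
    "answer": ["yes", "yes", "no", "yes", "no", "yes",
               "no", "no", "no", "yes", "no", "no"],
})

# Cross-tabulate answers by wave and test whether they differ.
table = pd.crosstab(responses["wave"], responses["answer"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"p = {p_value:.3f}")  # a small p-value suggests the waves differ
```

With a pilot-sized sample like this one, an exact test would be more appropriate; the point is the early-versus-late comparison itself, not the specific statistic.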
Professor Barwise emphasizes that minimizing bias requires careful attention throughout the survey process: “The top tip is do much more piloting than you think you need. Do pre-piloting then piloting. Do some A/B testing on a small scale. It’s a very nitty gritty, getting your hands dirty kind of process so you just have to put in the grunt work.”
Here are the key strategies to implement:
1. Recruit a representative sample. Don’t rely solely on open calls for feedback through pop-ups or email blasts. Instead, aim for a representative sample of your target population. “If you’ve got more than 50% response then you’d be pretty unlucky if your respondents were very biased,” notes Professor Barwise.
2. Word questions neutrally. Avoid leading language and emotionally charged phrasing. Professor Barwise recommends extensive testing: “To make sure respondents are interpreting the question in the way you mean them to, you need to do a lot of piloting. You may also want to ask the same question in several ways, ideally to different pilot samples, just to see if you get equivalent responses.”
He suggests a phased approach: pre-piloting first, then full piloting, then small-scale A/B testing of alternative wordings.
3. Keep surveys short. Minimize survey fatigue to reduce drop-off rates and careless answers. Long surveys lead to rushed responses and pattern answering, both forms of response bias.
4. Consider anonymity. “Going anonymous can help,” advises Professor Barwise. “For instance with things like 360-degree feedback where people are rating their subordinates, bosses and colleagues, anonymity is crucial. If you’re doing an employee survey it’s really important that they believe their anonymity will be protected as they might think they’re going to get into trouble.”
5. Follow up with non-respondents. Use reminders or alternate contact methods to boost response rates and reduce self-selection bias. Professor Barwise suggests: “Find another way of reaching a sample of the people who didn’t respond and maybe use more intensive methods to survey just a few of them, but with a higher response rate. Then see if their responses were very different from the ones from your main sample.”
6. Pilot your survey. Test with a small audience to catch unclear or leading questions before launching to your full sample. “If the different ways of asking the question give you the same quantitative answers then it’s pretty robust and it doesn’t matter too much which question you use,” explains Professor Barwise.
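To check whether two wordings really do “give you the same quantitative answers”, you can compare the pilot results directly. A minimal sketch, using made-up counts from two hypothetical pilot samples of 100 respondents each; Fisher’s exact test is one reasonable choice at pilot scale:

```python
# Minimal sketch: test whether two question wordings elicit different
# shares of positive answers. Counts are made up for illustration.
from scipy.stats import fisher_exact

# Rows: wording A, wording B. Columns: positive answers, other answers.
table = [[42, 58],   # wording A: 42 of 100 positive
         [55, 45]]   # wording B: 55 of 100 positive

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")
# A large p-value suggests the question is robust to wording;
# a small one means responses are sensitive to how you phrase it.
```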
7. Screen out junk responses. When using incentives, protect against people giving throwaway answers just to collect the reward. Professor Barwise recommends several screening methods:
“Your software needs to screen for obvious issues like surveys that should take 20 minutes coming back in 3 minutes. You can also test for internal consistency by including a couple of brands that don’t actually exist in brand awareness questions. If a respondent picks a dummy answer, you might discard that response altogether.”
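Both of these checks are easy to automate. A minimal sketch, assuming hypothetical field names (a completion time in minutes and the set of brands each respondent claimed to recognise) and an arbitrary speed cut-off of 30% of the expected duration:

```python
# Minimal sketch: screen out speeders and respondents who "recognise"
# dummy brands. Field names and thresholds are assumptions.
import pandas as pd

EXPECTED_MINUTES = 20
DUMMY_BRANDS = {"Brand X", "Brand Y"}  # brands that don't actually exist

df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "minutes": [18.5, 3.2, 21.0],
    "brands_recognised": [{"Acme"}, {"Acme", "Brand X"}, set()],
})

# Speeders: a 20-minute survey returned in a fraction of the time.
too_fast = df["minutes"] < EXPECTED_MINUTES * 0.3

# Internal consistency: claiming to recognise a non-existent brand.
picked_dummy = df["brands_recognised"].apply(
    lambda brands: bool(brands & DUMMY_BRANDS)
)

# Discard flagged responses, as the quote above suggests.
clean_df = df[~(too_fast | picked_dummy)]
print(clean_df)
```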
8. Time your survey carefully. Avoid surveying only during extreme moments like post-crisis periods or high-celebration events, as these can bias responses toward unusually positive or negative sentiment.
💡 Pro tip: Minimising bias starts with how you ask the question. Read our guide to writing better survey questions to learn how smart phrasing leads to more accurate, trustworthy insights.
You might be tempted to nudge responses in a particular direction, but if you don’t aim for neutral, representative surveys, you’re missing the point of market research. Robust surveys help you really understand your customers and provide more than simple headlines for PR campaigns.
Done properly, surveys offer a true temperature check for your brand and marketplace, providing real, actionable insights that can guide strategy, reveal opportunities, and prevent costly mistakes. As Professor Barwise notes: “I’m a great fan of survey research where you’re measuring things like opinions; how much respondents like something. It’s ideal for NPS, for example.”
Being aware of response bias and non-response bias means you can take more care when designing surveys, while proper testing gives you the confidence to base decisions on the results. Remember: response bias can be minimized through disciplined piloting with progressively larger samples of real respondents – and most surveys would benefit from that process.
To learn how Attest can help you avoid both types of bias, book a demo today.
Jacob has 15+ years’ experience in research, having previously worked at Ipsos, Kantar and more. His goal is to help clients ask the right questions, get the most impact from their research and build their skills in research methodologies.
Tell us what you think of this article by leaving a comment on LinkedIn.