How to Minimise Survey Bias

Professor Patrick Barwise explains how to avoid response bias and nonresponse bias in survey research so brands gain more valuable insights.

This may sound obvious, but survey research only has value if the results are valid. Yet many brands are basing decisions on unreliable surveys.

In this post, we look at how to minimise two sources of survey bias, response bias and nonresponse bias, which can skew results on top of the familiar problem of a sample that is too small or unrepresentative. Most market researchers know how to minimise sampling problems, but many pay too little attention to these other sources of bias, which can be even more important.

Minimise survey bias: Professor Patrick Barwise

Response bias is caused by asking non-neutral questions or providing non-neutral response options, so that the answers are influenced in one way or another. I experienced an example of this only today, when asked by Lloyds Bank if I was “delighted” with the service I’d received.

Nonresponse bias, in contrast, occurs when the responses are valid for those who respond, but those who don’t respond differ from those who do in ways that matter for the purposes of the research.

To explain these two biases in more detail – and how market researchers can minimise them – we spoke to Professor Patrick Barwise, one of Attest’s investors and advisers.

Patrick is emeritus professor of management and marketing at London Business School, a Patron of the Market Research Society, and author of successful books on marketing strategy, execution and, most recently, leadership.

How can the wording of questions cause response bias?

“We all know that responses to questions can be sensitive to the wording. Some of the recent work in behavioural economics is actually based on that – asking questions that are technically the same (like the respondent’s estimate of the percentage of people who do or don’t have a particular attribute) but where different ways of framing the question elicit significantly different responses.

“This is why people like me are fanatical about piloting. To make sure respondents are interpreting the question in the way you mean them to, you need to do a lot of piloting. You may also want to ask the same question in several ways, ideally to different pilot samples, just to see if you get equivalent responses.

“In academic research, to get published, you may have to ask each respondent the same question using five different wordings. You then check that the different wordings are all measuring the same construct by looking at how closely the responses correlate, using a statistic rather impressively called Cronbach’s alpha. This is technically measuring ‘reliability’ (are they all measuring the same construct?) rather than validity (are they measuring the right construct?). All this is enormously tedious for the respondents being asked the same question several times over, but it enables the academic to put the Cronbach’s alpha into their paper in order to get the referees to accept it.”
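As an aside for readers who want to see the mechanics: here is a minimal sketch of how Cronbach’s alpha could be computed for a set of alternative wordings, using the standard formula. The ratings and the rule-of-thumb threshold are hypothetical illustrations, not part of Patrick’s own workflow.

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                          # number of alternative wordings (items)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each wording across respondents
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents rating the same question asked in 3 wordings (1-7 scale)
ratings = np.array([
    [6, 7, 6],
    [3, 2, 3],
    [5, 5, 4],
    [2, 2, 1],
    [7, 6, 7],
])

alpha = cronbachs_alpha(ratings)
print(f"Cronbach's alpha: {alpha:.2f}")  # values above ~0.7-0.8 are conventionally taken as adequate
```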

That sounds very technical. What methods can non-academics use to avoid response bias?

“In the real world, you can tackle response bias in two phases: first at the qual – pre-pilot – stage, and then at the quant stage, mainly using A/B testing. By pre-pilot, I mean literally going to just four friendly people, perhaps at the client, giving them the question and talking to each of them individually about their interpretation of it and how they might respond to it.

“Then you do some proper semi-structured qual research: you get people’s responses and then get them to expand on why they gave those responses, still on a small scale – perhaps just a dozen or so people, interviewed individually so they don’t influence each other. At that point you should be ready to do A/B testing with different samples using the most promising wordings.

“A/B testing involves asking differently worded questions to different subsamples, rather than asking the whole sample a question worded in several ways. The number of respondents and the number of rounds you have to do will depend on both the scale of the project and the extent to which the responses hang together.

“If the different ways of asking the question give you the same quantitative answers then it’s pretty robust and it doesn’t matter too much which question you use. You’d probably go for the one that gave you the highest response rate.

“Response bias can be minimised by going through those sorts of disciplines properly with an increasing number of real respondents. Most surveys could be improved in this way.”
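To make the quant stage concrete, here is a minimal sketch of how an A/B wording test could be analysed. We’ve used a chi-squared test on invented counts; the test choice and the numbers are our assumptions, not a method Patrick prescribes.

```python
from scipy.stats import chi2_contingency

# Hypothetical A/B test: one question, two wordings, each shown to a separate subsample.
# Rows are wordings; columns are counts of "yes" and "no" answers.
counts = [
    [132, 68],   # wording A: 200 respondents, 66% "yes"
    [118, 82],   # wording B: 200 respondents, 59% "yes"
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The wordings elicit significantly different answers: the question is wording-sensitive.")
else:
    print("No significant difference: the result looks robust to this change of wording.")
```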

Are there any types of survey that are especially vulnerable to response bias?

“Yes, there are. For example, surveys can be an unreliable source of insights into many aspects of major purchases such as car buying. Part of the problem is that the purchase is spread over weeks or even months. But you also have to be careful about anything in which the response could make the respondent look better or worse to himself or herself and to other people.

“For instance, if someone is paying a premium for a luxury car which is functionally pretty similar to a mass-market car, and the real reason why they’re buying it is to show other people and themselves that they’re rich and successful, they’re not going to say, ‘I chose the BMW to look rich and successful.’ What they will say is, ‘I chose the BMW because I love the styling’ or ‘I identify with the brand because it’s a driver’s car.’ Anything in which the responses are not neutral in terms of people’s self-presentation is problematic.

“If you’ve got different wordings where one version will tend to bias the response one way and another will bias it the other way, you can test how sensitive the result is to the wording. But sometimes there’s going to be a bias however you word it, because any positive response makes respondents look good – maybe by suggesting that they care about the environment or whatever. If that is the case, you need more qual research and piloting to get to the bottom of it. There’s no magic bullet.”

Let’s say you want to find out about a respondent’s alcohol consumption, for example. How can you encourage truthful answers?

“Booze is one classic area where most people are in denial, and surveying them is not a reliable way to get this information. Another is people’s use of new technology. I was involved in a video ethnographic study of how much people were using PVRs (personal video recorders). It was actually a precursor to Gogglebox, and Channel 4 was one of the sponsors.

“What we found was that people’s self-perception was that they hardly ever watched commercials any more. They genuinely seemed to have believed that they always watched off the PVR and fast-forwarded through the commercials. However, by filming them over a week using their TVs, we found that in reality PVR usage was only around 15-20% and they still saw a lot of commercials.

“I’m a great fan of survey research where you’re measuring things like opinions, such as how much respondents like something. It’s ideal for NPS, for example. However, if a survey is asking something which is either hard for someone to answer or where the answer makes them look better or worse, you may need to look for another method.”

Can allowing respondents to answer anonymously help avoid response bias?

“Yes, going anonymous can help. For instance, with things like 360-degree feedback, where people are rating their subordinates, bosses and colleagues, anonymity is crucial.

“If you’re doing an employee survey, it’s really important that employees believe their anonymity will be protected, as they might think they’re going to get into trouble. It’s best to get an outside supplier to run it, as they’re more likely to be trusted.”

Avoiding survey bias is not just about the questions you ask; it’s also about who’s answering. Why do brands need to be aware of nonresponse bias?

“Market researchers are trained to worry a lot about sample size, but if there is some kind of systematic difference between the people who respond and the people who don’t, the sample size is usually the least of your problems.

“Nonresponse bias is something you can’t directly observe, because the only data you’ve got is from people who have responded, not from those who haven’t, so it’s a bit of a nasty.

“One thing you can do is find another way of reaching a sample of the people who didn’t respond, perhaps using more intensive methods to survey just a few of them but with a higher response rate, and then see if their responses are very different from those in your main sample. Another thing you can do is check whether the late responses are very different from the early responses, which would suggest there might be something else going on.”
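That second check is sometimes called wave analysis. Below is a minimal sketch of how it might look in practice, assuming invented rating data and a simple t-test; the comparison you actually run should suit your own measures.

```python
from scipy.stats import ttest_ind

# Hypothetical wave analysis: a key rating (e.g. purchase interest, 1-7 scale) split by
# when people answered. Late responders, who needed chasing, are often the closest
# available proxy for non-responders.
early_wave = [7, 6, 7, 5, 6, 7, 6, 5, 7, 6]   # answered within the first few days
late_wave = [4, 5, 3, 5, 4, 4, 5, 3, 4, 5]    # answered only after reminders

t_stat, p_value = ttest_ind(early_wave, late_wave)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Early and late responders differ: treat the headline results with caution.")
```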

What kind of response rate do you need to aim for to avoid nonresponse bias?

“If you’ve got a response rate of more than 50%, then you’d be pretty unlucky if your respondents were very biased. But in a lot of contexts, especially online, where you’ve often got a response rate of only, say, 2% or so, it could well be that the people who are responding are more interested in the product or the issue than the 98% who are not responding, which could seriously bias the results.

“There is a tendency in market research to think that as long as the sample is big enough the results are valid, but this is one of the big reasons why they might not be.”

Is there a risk of nonresponse bias from the type of people most willing to complete surveys, i.e. professional panellists?

“If you’re incentivising people, it’s a particular risk. With online surveys, one of the things you have to protect against is people putting in junk just to get the reward. It’s obvious that if a survey that’s supposed to take 20 minutes comes back in 3 minutes, something is wrong. Your software needs to screen for that.

“Clearly some people are wise to that, so they complete the survey in two minutes and then wait 15 minutes to send it. If you can measure keystrokes and the delays between keystrokes, that’s even better.
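As an illustration of that first screen, here is a minimal sketch based on completion times alone. The one-third-of-median cutoff is a common rule of thumb rather than a standard, and the respondent IDs and durations are invented.

```python
import statistics

# Hypothetical speeder screen: flag anyone who finished in under a third of the
# median completion time (a rule of thumb, not a standard).
durations_sec = {"r001": 1180, "r002": 1025, "r003": 190, "r004": 1320, "r005": 210}

median = statistics.median(durations_sec.values())
speeders = [rid for rid, secs in durations_sec.items() if secs < median / 3]
print(f"median duration: {median:.0f}s, flagged speeders: {speeders}")
```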

“You can also test for internal consistency by asking about things in which some of the responses are nonsense. For instance, if you’re asking a prompted brand awareness question (‘Which of the following brands have you ever bought or are you aware of?’), you can include a couple of brands that don’t actually exist.

“If a respondent picks a dummy answer, you might discard that response altogether, or say that two responses like that would disqualify them. Sometimes people might think they’ve heard of a brand when they’re mixing it up with something else, but it would still be a flashing light that the response was less reliable.”
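Here is a minimal sketch of how that dummy-brand screen might be coded. The flag and disqualify thresholds follow the rule Patrick describes; the brand names and response data are invented.

```python
# Hypothetical screen for the dummy-brand check: one fake-brand pick flags the
# response as less reliable, two disqualify it, per the rule described above.
DUMMY_BRANDS = {"Brand X", "Brand Y"}  # fictitious names seeded into the question

responses = {
    "r001": {"BMW", "Audi"},
    "r002": {"BMW", "Brand X"},
    "r003": {"Brand X", "Brand Y", "Audi"},
}

for rid, picked in responses.items():
    hits = len(picked & DUMMY_BRANDS)
    status = "ok" if hits == 0 else ("flag" if hits == 1 else "disqualify")
    print(f"{rid}: {hits} dummy brand(s) -> {status}")
```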

Are there any other alarm bells that suggest bias may have occurred?

“You’d look at extreme values, and at claimed behaviours which seem unlikely. Those things are easier to spot at the sample level than at the individual level. If it looks as if you’re getting responses that are way out of line with previous surveys or other people’s research, especially if the other research is based on harder data, then you’d say, ‘We’ve got a problem here.’ That could be either a response bias or a nonresponse bias.”
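For the sample-level checks, one simple starting point is a robust outlier screen. The sketch below uses a median/MAD rule with invented data; Patrick doesn’t prescribe a particular method, so this choice is our assumption.

```python
import statistics

# Hypothetical screen for implausible claimed behaviours, using a robust z-score
# (median/MAD), which is less easily masked by the outlier itself than a
# mean/standard-deviation rule. The 3.5 cutoff is a common convention.
claimed_weekly_units = [4, 6, 2, 8, 5, 3, 7, 95, 6, 4]  # e.g. claimed weekly alcohol units

med = statistics.median(claimed_weekly_units)
mad = statistics.median([abs(x - med) for x in claimed_weekly_units])
flagged = [x for x in claimed_weekly_units if mad and 0.6745 * abs(x - med) / mad > 3.5]
print(f"median = {med}, MAD = {mad}, flagged values: {flagged}")
```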

Finally, do you have any other top tips for avoiding bias in customer surveys?

“The top tip is to do much more piloting than you think you need. Do pre-piloting, then piloting. Do some A/B testing on a small scale. It’s a very nitty-gritty, getting-your-hands-dirty kind of process, so you just have to put in the grunt work.”

Conclusion

While you might actually want to sway people’s responses in a particular direction (like Lloyds did with mine), if you don’t aim for a neutral and representative survey, you’re missing the point of market research.

Robust surveys help you really understand your customers and do more than simply provide headlines for your latest PR campaign (e.g. ‘92% of customers say they are delighted with our service’).

Done properly, surveys can offer a true temperature check for your brand and the marketplace, and can provide real, actionable insights. This data can guide your strategy, show you opportunities and prevent you from making costly mistakes – so it stands to reason you want it to be accurate.

Being aware of the possibility of response bias and nonresponse bias means you can take more care when designing your surveys, while basic testing can offer the certainty you need to base decisions on the results.

To learn how Attest can help you avoid both types of bias, book a demo today or call us on 0330 808 4746 to chat to our team of market research experts.

Bel Booker

Senior Content Writer 

Bel has a background in newspaper and magazine journalism but loves to geek-out with Attest consumer data to write in-depth reports. Inherently nosy, she's endlessly excited to pose questions to Attest's audience of 125 million global consumers. She also likes cake.
