WTF do things like statistical significance, confidence levels & margin of error mean? Your no BS guide

If you’re new to the world of consumer intelligence, you might feel like you’re swimming through a confusing sea of jargon that sounds like it belongs in a science lab, not your marketing team. At best, ‘confidence interval’ or ‘statistical significance’ might rustle up long-since-forgotten memories of GCSE maths.

Our mission at Attest is to make the power to know available for everyone, whatever the company and whatever your job title. If you need to talk to your consumers, learn more about your markets, script effective surveys, or gain unique insights into competitors, then we can help.

But in doing so, you might come across these words that put you right off the idea – you (probably) don’t have a doctorate in advanced mathematics, after all.

It needn’t be so. These terms sound complicated but the ideas behind them are simple. In some cases, even, you might be able to ignore them entirely (with no detriment to your results and the decisions you make).

This article will set out the meanings of some of the key terminology you’ll come across in the market research sector, whether or not you should care about those concepts, and if you should, then how to calculate them and what this means for your surveys.

What is statistical significance?

All the terms we’ll explore in this article hinge on this one, because if you don’t care about your survey being statistically significant, you can pretty much ignore all the other concepts that underpin it.

So what is it? The usual response is something like this:

“Statistical significance is the probability of finding a given deviation from the null hypothesis – or a more extreme one – in a sample. Statistical significance is often referred to as the p-value (short for “probability value”) or simply p in research papers.”

In plain English, a statistically significant survey is one where the results are likely to be true: there’s only a small chance that they came about through chance or randomness.

A statistically significant survey, then, allows you to conclude that the results reflect something real, and aren’t just the work of random circumstances.

Before we steamroller ahead to an explanation of the criteria that make up a statistically significant survey, it’s worth pausing and considering whether you should even care if your survey meets them or not.

Should you care about statistical significance?

‘Yes’, you’re thinking. You want to know that the results of the survey are true and not coincidental, so of course you should care about statistical significance.

But this isn’t always the case.

There are a number of use cases where statistical significance isn’t necessary in order for you to draw important conclusions from the data.

At this point, it’s worth highlighting that statistical significance doesn’t guarantee that the results of your survey are important or useful. Colloquial use of the term tends to blur ‘statistical significance’ with ‘important’, ‘useful’, ‘valuable’ and other words that have more to do with asking the right questions than with the veracity of the data.


In other words, statistical significance doesn’t make your survey important.

Context is key. While statistical significance plays a large role in the confidence you can place in the data you’ve gathered, how it was gathered, the questions asked, and what you do with the insights are all as important for understanding the value of the results.

A bad survey – with a statistically significant data set – is still a bad survey!

So, you’re conducting consumer research to get valuable data, but does that data really need to be statistically significant?

Not always, is the short answer.

It might be worthwhile asking yourself: does the value of the survey lie in knowing that the results apply to the rest of the population, or does the value lie in the nature of each response? If you’re conducting a qualitative-heavy survey, you might be leaning towards the latter.

For instance, if you want in-depth feedback on a new creative, gathering a handful of real, qualitative responses could provide more than enough guidance. While not statistically significant, the data is still highly valuable.

Another case in which statistical significance might pale into insignificance is when you’re looking for just a handful of insights to quickly sense-check an idea, or to confirm that you’re heading in the right direction. For example, some fast market insight ahead of your agency’s first chemistry meeting with a prospective client.

Yet another case might be if you’re unsure of the size of the audience, when you’re surveying consumers to discover where there are pockets of interest for an initial new product idea. Without a firm grasp of the size of the population, you can’t establish the sample size needed for statistical significance anyway.

If you’re conducting any of these forms of surveys, and need help choosing the right sample size without the guide of statistical significance, our team at Attest are more than happy to help.

On the other hand, if the value of the data does lie in the likelihood of the data matching that of the rest of the population (or the broader consumer group you’re targeting), then statistical significance is vital for your survey.

How to achieve a statistically significant survey

Statistical significance is built on a number of other, equally science-y sounding words.

You’ll need to establish that the sample size you have selected to send your survey to will achieve the statistical significance you are aiming for.

This example should indicate the importance of sample size for being able to apply your results to the rest of the population (who you haven’t surveyed):

Say you survey 10 people in the street, asking whether they use Facebook. 9 of the 10 you speak to (it can be hard to get people to stop and talk to you!) say they do. In reality, around 60% of the UK population uses the site – some 39.2 million people – so with a sample this small, stumbling across 9 users on one High Street is a perfectly feasible fluke. Take those 10 responses at face value, though, and you’d conclude that 90% of the country is on Facebook.
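To get a feel for how wildly a 10-person sample can swing, here’s a minimal sketch in Python. The 60% ‘true’ Facebook usage figure is an illustrative assumption based on the numbers above, and the simulation simply repeats the street survey many times at two different sample sizes:

```python
import random

# Minimal sketch: how much do survey results wobble with small vs. large samples?
# Assumption: roughly 60% of the population uses Facebook (illustrative figure only).
TRUE_PROPORTION = 0.6
RUNS = 1_000

def simulate(sample_size: int) -> list[float]:
    """Repeat the 'stop people in the street' survey RUNS times and return
    the share of Facebook users found in each run."""
    estimates = []
    for _ in range(RUNS):
        users = sum(random.random() < TRUE_PROPORTION for _ in range(sample_size))
        estimates.append(users / sample_size)
    return estimates

for n in (10, 1_000):
    estimates = simulate(n)
    print(f"sample size {n}: estimates ranged from "
          f"{min(estimates):.0%} to {max(estimates):.0%}")
```

Run it a few times and the pattern is clear: with 10 respondents the estimate can land almost anywhere, while with 1,000 respondents it stays within a few percentage points of the true figure.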

Having the right sample size will increase the likelihood that your results represent the rest of the population (statistical significance).

This is where these additional concepts come in.

Margin of error

Firstly, margin of error, also known as the confidence interval (the two terms are used interchangeably, just to be even more confusing!), is the range within which you’d expect your sample’s result to hold true for the wider population.

The industry-wide accepted margin of error is 5%. This isn’t a hard and fast rule, though, and you can choose to be more confident that your results reflect the views of the entire population (in which case you’ll need a larger sample size) or less confident (which will reduce the sample size required).    

A 5% margin of error means that, if 70% of people taking your survey said they would ‘definitely’ go and buy your new product if launched next week, then you would expect 65%-75% of the population to actually go out and buy your product. So it acts like a spread.

Had you reduced the margin of error (aka confidence interval) to just 3%, you’d expect your results to be even more accurate (creating a tighter spread of 67%-73%).

For many things, a 5% delta either way is fine. However, if you were forecasting a new product launch where you need to sell a million units to break even in year one, you might need a more accurate forecast, and therefore you’d demand a smaller margin of error.
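To see how the margin of error tightens as you add respondents, here’s a minimal sketch using the standard formula for a proportion (z × √(p(1−p)/n)). The 1.96 z-score for 95% confidence and the ‘worst case’ p = 0.5 are common statistical conventions, not figures taken from this article:

```python
import math

Z_95 = 1.96  # conventional z-score for a 95% confidence level
P = 0.5      # worst-case proportion: gives the widest, most cautious margin

def margin_of_error(sample_size: int, z: float = Z_95, p: float = P) -> float:
    """Approximate margin of error for a proportion (no finite-population correction)."""
    return z * math.sqrt(p * (1 - p) / sample_size)

for n in (100, 385, 1_068):
    print(f"{n} respondents -> margin of error of roughly {margin_of_error(n):.1%}")
```

Roughly speaking, about 385 respondents gets you to a 5% margin of error at 95% confidence, and around 1,068 gets you to 3%, which is why tightening the spread quickly gets expensive.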

Confidence level

So a statistically significant survey is one that has the desired sample size, dictated by the margin of error you’re willing to work to? Not quite there yet.

We still need to know how confident we can be that the result for the full population really would fall within that 65%-75% range.

This is called the confidence level (not to be confused with confidence interval, aka margin of error!).

You can’t always be 100% sure that the results reflect the views of the larger population, even within the range dictated by the margin of error, because there may be niche views unaccounted for in the sample. So confidence level is the confidence you have that your results (within the band dictated by the margin of error) are reproducible and not an outlier.

Again, the industry-accepted standard is 95%. A confidence level of 95% means that if you ran the same survey, to the same target audience, 100 times, you’d expect the results to fall within your margin of error 95 times.

98% and 99% confidence levels are also popular choices (meaning the chance of a freak survey yielding misleading results is incredibly slim). However, they also require much larger sample sizes.

Combined with your chosen margin of error, this lets you reach a sample size that will deliver statistically significant results (i.e. ones that are likely to be true, and not the result of chance).

These terms, and lots more, are explained further in our Market Research Glossary.

Handy examples

There is a wealth of sample size calculators published on the internet to take the algebra out of your hands.

Here are some popular populations and the effect that changing the margin of error and confidence level has on the sample size required to achieve statistical significance:

Population Name    | Population Size | Margin of Error | Confidence Level | Sample Size Required for Statistical Significance
UK population      | 66,000,000      | 10%             | 80%              | 41 – yes, that small!
London population  | 8,100,000       | 5%              | 85%              | 208
UK Millennials     | 16,200,000      | 2%              | 90%              | 1,681
UK Facebook users  | 39,200,000      | 1%              | 99%              | 16,634
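If you’d like to see the arithmetic those calculators are doing, here’s a minimal sketch of one. It uses the standard sample size formula for a proportion plus a finite-population correction; the rounded z-scores below are an assumption about how most online calculators work, so your own figures may differ by a respondent or two, but it should land on (or very close to) the numbers in the table above:

```python
import math

# Commonly used (rounded) z-scores for each confidence level.
Z_SCORES = {80: 1.28, 85: 1.44, 90: 1.64, 95: 1.96, 99: 2.58}

def sample_size(population: int, margin_of_error: float, confidence: int,
                p: float = 0.5) -> int:
    """Required sample size for a proportion, with finite-population correction."""
    z = Z_SCORES[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    corrected = n0 / (1 + (n0 - 1) / population)
    return math.ceil(corrected)

examples = [
    ("UK population", 66_000_000, 0.10, 80),
    ("London population", 8_100_000, 0.05, 85),
    ("UK Millennials", 16_200_000, 0.02, 90),
    ("UK Facebook users", 39_200_000, 0.01, 99),
]

for name, pop, moe, conf in examples:
    print(f"{name}: {sample_size(pop, moe, conf)} respondents")
```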

The next steps…

It’s worth understanding all of these terms so you can appreciate the importance of appropriate sample sizes, helping ensure you’re not overpaying for unnecessary sample or underpaying and getting poor quality results. It’s also valuable to understand that, if you have the budget available, you can increase your confidence above the industry standards by increasing sample size.

However, the industry standards are appropriate and effective for the majority of consumer intelligence projects, so the only additional information you’d need to input into one of the many sample size calculators out there is the size of the population of your target market.

If you’re in any doubt about the sample size you should be aiming for, whether statistical significance should dictate your sample size, or how to calculate your population size, get in touch with the Attest team.
