Having the right sample size is critical in survey design. Too small, and your results could be misleading. Too large, and you risk wasting time and budget.
That’s why our in-house research experts created the Attest sample size calculator – a free, easy-to-use tool that takes the guesswork out of determining how many responses you need to gather meaningful insights.
This guide will walk you through what sample size means, why it matters and the key factors that influence it, from confidence levels to survey types. Whether you’re brand tracking, testing creative, or exploring new markets, we’ve got you covered.
Sample size in market research is the number of respondents who participate in your survey. These respondents represent a portion of a population or target group whose opinions and behaviors you want to analyze as part of your research.
Your sample size directly impacts how accurate and meaningful your survey results will be. A “good” sample size provides reliable results with your required margin of error. It balances the practicalities of conducting the survey with your research requirements.
When your sample size is too small, it may lead to findings that aren’t representative of your target market. On the other hand, an unnecessarily large sample size might waste resources without providing meaningful improvement in accuracy.
When performing a sample size calculation, you need to consider the context of your target audience and the type of research you’re running. At the same time, it’s important to factor in the level of robustness and accuracy you’re seeking from your sample. Let’s examine these in more detail below.
Survey type simply refers to the method you use to collect responses. We go into greater detail on survey types in the section below, but in a nutshell, different types impact response rates, data quality and how large your sample size needs to be.
ℹ️ For example: Brand tracking surveys require large samples to reduce the natural flux in data over time. Creative and concept testing, on the other hand, whether monadic (evaluating one concept) or sequential monadic (evaluating multiple concepts), is often specific enough that a smaller sample size will still let you decide which creative concept to launch.
The population size is the total number of people you want to study or draw insights about in your market research.
Let’s say a beauty brand wants to research the viability of launching an all-natural skincare line – their target population could consist of women aged 25-40 in a specific geographic location. The brand would need to know the overall population size to determine the ideal representative sample to survey.
💡Pro-tip: The Attest survey sample size calculator has the populations of popular survey countries pre-programmed into the tool. You can also input a custom audience number to perform your calculation.
When you run a survey, your results are based on a sample – not the whole population. That means there’s always some uncertainty in the results. Two terms help express that uncertainty: margin of error and confidence interval.
Here’s how they work:
▪️The margin of error is the amount your result could be off, in either direction. It’s a single number, like ±5%.
▪️The confidence interval is the full range you get when you apply that margin of error to your result. So if 60% of people say they like a product, and your margin of error is ±5%, your confidence interval is 55% to 65%.
Think of it like this:
📊Confidence interval = result ± margin of error
So while people often use the two terms interchangeably, they’re not exactly the same. The margin of error is the “plus or minus” part; the confidence interval is the actual range.
When using our sample size calculator, remember that lowering your margin of error (for more precise results) means you’ll generally need a bigger sample size.
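To see that trade-off in numbers, here’s a minimal Python sketch (not the Attest calculator itself) using the standard margin-of-error formula for a proportion, e = z × √(p(1−p)/n); the 60% result and the sample sizes are hypothetical:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion (z = 1.96 for a 95% confidence level)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 60% of 384 respondents say they like the product
p, n = 0.60, 384
e = margin_of_error(p, n)
print(f"Margin of error: ±{e:.1%}")                        # ≈ ±4.9%
print(f"Confidence interval: {p - e:.1%} to {p + e:.1%}")  # ≈ 55.1% to 64.9%

# Roughly quadrupling the sample size halves the margin of error
print(f"With n=1,536: ±{margin_of_error(p, 1536):.1%}")    # ≈ ±2.4%
```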
The confidence level tells you how sure you can be that your survey results reflect the views of the overall population.
For example, a 95% confidence level means that if you repeated your survey 100 times, you’d expect the results to fall within the margin of error in around 95 of them. The remaining 5 might miss the true population value simply due to random sampling variation.
This doesn’t mean there’s a 95% chance the result is right. It means the method you’re using to calculate the interval would include the true value 95% of the time if repeated over and over.
Higher confidence levels, such as 99%, give you more certainty but require a larger sample size. Lower levels, like 90%, reduce your sample size needs, which can be helpful when you’re working with limited budgets, need faster results, or are only aiming for directional insights.
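If it helps to see the “repeated over and over” idea in action, here’s a small, purely illustrative Python simulation: it draws 1,000 samples from a population where the true agreement rate is 60% (a made-up figure) and counts how often each sample’s 95% confidence interval captures that true value:

```python
import math
import random

random.seed(42)

TRUE_P = 0.60       # true population proportion (hypothetical)
N_PER_SURVEY = 400  # respondents per simulated survey
N_SURVEYS = 1000
Z = 1.96            # z-score for a 95% confidence level

captured = 0
for _ in range(N_SURVEYS):
    # Simulate one survey of 400 yes/no answers
    sample = [random.random() < TRUE_P for _ in range(N_PER_SURVEY)]
    p_hat = sum(sample) / N_PER_SURVEY
    e = Z * math.sqrt(p_hat * (1 - p_hat) / N_PER_SURVEY)
    if p_hat - e <= TRUE_P <= p_hat + e:
        captured += 1

print(f"{captured / N_SURVEYS:.0%} of intervals contained the true value")
# Expect a figure close to 95%
```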
Standard deviation measures how much variation there is in your data – or how “spread out” people’s answers are.
In survey sampling, it helps estimate how consistent your results are likely to be. A low standard deviation means most responses are close to the average. A high standard deviation means the answers are more spread out.
Let’s say a beauty store surveys 10 customers about how much they spend each month. The average is $57, but one person says they spend $100. That $100 is far from the average, which increases the standard deviation.
Why does this matter? Because more variation means you may need a larger sample size to get accurate results. If people give widely different answers, you’ll need more responses to confidently understand the average.
💡Pro-tip: In general, 0.5 is a good standard deviation to set in the calculator – it’s the most conservative value, so you won’t underestimate the sample size you need.
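To make the spending example above concrete, here’s a quick sketch using Python’s statistics module; the ten spend figures are invented to match the scenario (an average of $57 with one $100 outlier):

```python
import statistics

# Hypothetical monthly spend for 10 surveyed customers (in $)
spend = [45, 50, 52, 55, 55, 58, 58, 60, 37, 100]

mean = statistics.mean(spend)
sd = statistics.stdev(spend)  # sample standard deviation

print(f"Average spend: ${mean:.0f}")       # $57
print(f"Standard deviation: ${sd:.0f}")    # ≈ $17 – the $100 outlier widens this

# Without the outlier, responses cluster much more tightly
print(f"Without outlier: ${statistics.stdev(spend[:-1]):.0f}")  # ≈ $7
```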
Here’s the sample size formula that Attest’s tool uses to determine your sample size. Note that the value shown in the calculator is the finite sample size, calculated using the equations below and rounded up to the nearest whole number, which is typical for this statistical calculation.
📊Unlimited population: n₀ = z² × p(1−p) / e²
📊Finite population: n = n₀ / (1 + (n₀ − 1) / N)
Here, e is the margin of error, p is the population proportion, N is the population size and z is the z-score.
A z-score is the number of standard deviations (the variation) a value is from the mean. Use the figures below to find the right z-score to input into the formula:
▪️90% confidence level: z = 1.645
▪️95% confidence level: z = 1.96
▪️99% confidence level: z = 2.576
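For readers who prefer code to formulas, here’s a sketch of the finite-population calculation above in Python; the variable names mirror the formula, and the exact correction and rounding Attest applies in its tool may differ slightly:

```python
import math

# Common z-scores by confidence level
Z_SCORES = {90: 1.645, 95: 1.96, 99: 2.576}

def sample_size(N: int, e: float = 0.05, p: float = 0.5, confidence: int = 95) -> int:
    """Finite-population sample size.

    N: population size, e: margin of error, p: expected proportion
    (0.5 is the most conservative choice), confidence: 90, 95 or 99.
    """
    z = Z_SCORES[confidence]
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)   # sample size for an unlimited population
    n = n0 / (1 + (n0 - 1) / N)              # finite population correction
    return math.ceil(n)                      # round up to the nearest whole number

# Illustrative example: population of ~54 million, ±5% margin of error, 95% confidence
print(sample_size(54_000_000))  # 385
```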
We mentioned that different survey types impact how large your sample sizes need to be. Here are more detailed sample size considerations for typical market research purposes:
➡️ Helps you answer: How is my brand doing?
Sample size considerations:
Go large: We recommend a minimum sample size of 1,000 (n=1,000). Brand trackers are typically based on bigger audiences to reduce the level of natural flux in the data. We also recommend setting smaller margins of error in our calculator for brand tracking.
Break it down: What subgroups are you interested in? For example, do you want to know what women aged 18-34 think about your brand? Make sure you account for how large these subgroups will be given your key questions.
Think long term: Aim to keep your sample size consistent over time when running a brand tracker. You’ll need to regularly bring in fresh sample and exclude recent previous respondents, which may reduce the available audience size.
Is it feasible: The more niche your audience is, the more likely you’ll need a smaller, yet still robust, sample size.
➡️ Helps you answer: Which concept or creative should I launch and why?
Sample size considerations:
Monadic or sequential monadic testing: On the whole, monadic testing (where you place individual concepts or creatives in separate surveys, or ‘cells’) is less biased and provides a deeper understanding of concept performance. Sequential monadic testing (where multiple concepts or creatives are placed in the same cell) is typically used with niche audiences where running many cells might not be possible. However, this method is less effective when concepts or creatives are similar, or when testing more than two concepts in a single test. Where possible, carry out monadic testing for the most unbiased data.
Quick tests: Traditional research agencies might recommend a sample size of 100-150 (n=100-150), largely because it takes them much longer to fill surveys. At Attest, we fill surveys far more quickly and recommend n=250 per cell. This gives you more robust data and greater confidence in the results you get back.
➡️ Helps you answer: What are the attitudes and behaviors of my (potential) audience?
Sample size considerations:
Go large: Aim for a larger sample size, especially if you’re unsure which subgroups or personas might emerge. A robust overall sample allows you to segment the data and uncover meaningful differences in how specific groups think or behave.
Is it feasible: Accurate consumer profiling is only possible when your sample is large enough to support meaningful subgroup analysis.
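As a rough, hypothetical illustration of why this matters: if your smallest subgroup of interest makes up 15% of the audience and you want at least 200 respondents in it, you can back out the overall sample you need.

```python
import math

def total_sample_for_subgroup(min_per_subgroup: int, subgroup_share: float) -> int:
    """Overall sample needed so the smallest subgroup still reaches its minimum size."""
    return math.ceil(min_per_subgroup / subgroup_share)

# Hypothetical: the smallest persona is ~15% of the audience and we want 200 of them
print(total_sample_for_subgroup(200, 0.15))  # 1334 respondents overall
```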
We recommend implementing these best practices to get the best possible results during your sample size calculation:
✅ Match sample size to your survey’s purpose: If you’re aiming for broad, data-driven decisions, go bigger. If you just need directional insight or qualitative feedback, a smaller, focused sample can be enough.
✅ Be aware of the “peak robustness” threshold: Higher accuracy requires more respondents. However, more people mean more time and cost. Find the sweet spot based on your budget, timeline and how critical the results are.
✅ Don’t overlook the value of small samples: Even without statistically significant results, small samples can reveal useful patterns, especially with niche audiences or early-stage exploration.
✅ Plan around open-ended questions: Open-ended questions take longer to answer, potentially slowing survey completion and reducing response rates. If you use many, alter your expected sample size to compensate for drop-offs. On the plus side, open-ended questions can produce insights you weren’t expecting.
Choosing the right sample size is essential for gathering reliable, actionable insights from your survey. Whether you’re running a brand tracker or testing creative concepts, Attest’s calculator produces the ideal sample size to ensure your research is statistically sound and cost-effective.
We recommend taking a deeper dive into the statistical terminology and concepts that power sound survey decisions. However, your bases are covered when using our tool, as it factors in components like survey type, confidence level and standard deviation to perform calculations.
We know there are a few considerations for choosing the right sample for your study. While this quick guide gives you key points to take into account, our Customer Research Team is always here to help.