As a business, you face big decisions: which features to prioritize, how to position your brand, or where to invest your marketing budget.
The insights you gather will only be as strong as the responses behind them, which raises the question: how many people do you actually need to survey?
If your sample size is too small, your results risk being distorted by chance or outliers. Too large, and you spend more time and money than necessary. The right sample size is the balance point that makes your research both reliable and actionable.
In this article, we’ll show you five steps to work out sample size. You’ll see how factors like population, margin of error, confidence level and variability shape the number and how practical limits like budget and timelines affect what’s achievable. Along the way, we’ll share quick benchmarks and examples to help you avoid common mistakes.
Getting sample size right ensures your survey insights are reliable and actionable without wasting time or budget. The process comes down to five steps: define your population, choose your margin of error, choose your confidence level, estimate the variability of responses, and run the numbers with Cochran’s formula (adjusted for a finite population if needed).
Practical limits like budget, time, or how easy your audience is to reach often decide what’s possible. Even a smaller, focused sample can still guide decisions, as long as you recognize the limitations.
Sample size is the number of people who take part in your research. In surveys, it’s the group of respondents whose answers you use to represent your target audience.
This target audience could be as broad as a national customer base, as focused as a regional target market, or as specific as a niche B2B segment.
In other words, your sample is the smaller slice of people you study so you can draw conclusions about the bigger population.
Sample size is not just a number. It determines whether your survey insights are credible enough to guide real-world business decisions.
Too small a sample and results could be swayed by random chance or those who buck the trend. Too large and you risk overspending time and budget for only marginal gains in accuracy.
The right sample size strengthens the credibility of your results and the confidence with which you can act on them.
For example, if you survey only 20 customers about a new product feature and two people happen to have extreme views, your percentages shift dramatically.
Scale up to a representative sample of 500 and those outliers carry far less weight, giving you a trustworthy picture of what your audience really thinks.
That trust makes research actionable and helps you make confident decisions.
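To see that effect in numbers, here’s a small illustrative sketch. The 50/50 baseline and the two atypical respondents are hypothetical, chosen only to show the scale of the swing at different sample sizes:

```python
# Illustrative only: suppose the true split on a yes/no question is 50/50,
# but two respondents with unusual views answer "yes" when a typical
# respondent in their place would have said "no".
for n in (20, 500):
    expected_yes = 0.5 * n                     # what a perfectly typical sample would show
    observed_yes = (expected_yes + 2) / n      # the two atypical answers added in
    print(f"n={n}: 50.0% expected vs {observed_yes:.1%} observed")
```

At n = 20 the two unusual answers move the headline figure from 50% to 60%; at n = 500 they move it to just 50.4%.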
Now that you know why sample size matters, let’s look at how to figure it out in practice. The process comes down to five steps, which we’ll discuss in more detail in the sections below:
1) Define your population size.
2) Choose your margin of error.
3) Choose your confidence level and find the matching z-score.
4) Estimate the variability of responses.
5) Run the numbers using Cochran’s formula and adjust for a finite population if needed.
Your population size is the total number of people your research is meant to represent. It’s important to define this number because it sets the starting point for calculating how many responses are enough.
For example, in a customer satisfaction survey, that might be the number of active customers you serve. In a national brand awareness study, it could be the entire adult population of the country you want to target.
To determine population size, start with the data you already have.
In addition, always tie the population to your research goal. For instance, if you only want feedback from VIP customers, or decision-makers in mid-size firms, narrow your population to those groups.
If you cannot find an exact number, make your best estimate. For very large populations, the exact size has less impact on the calculation. Once a population goes over 20,000, increasing the population size has almost no effect on the required sample size.
The key is to define the scope clearly so every other step builds on a solid foundation.
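To see why that’s true, here’s a quick illustrative sketch using Cochran’s formula and the finite population correction covered later in the article, assuming a 95% confidence level, a ±5% margin of error and maximum variability:

```python
import math

Z, P, E = 1.96, 0.5, 0.05                      # 95% confidence, max variability, ±5% MOE
n0 = (Z**2 * P * (1 - P)) / E**2               # Cochran's formula for an infinite population

for N in (500, 2_000, 20_000, 200_000, 2_000_000):
    n = n0 / (1 + (n0 - 1) / N)                # finite population correction
    print(f"Population {N:>9,}: sample needed = {math.ceil(n)}")
```

The required sample climbs quickly for small populations (218 for a population of 500, 323 for 2,000) but then plateaus: 377 at 20,000, 384 at 200,000 and 385 at 2,000,000.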
Once you know your population size, the next step is to set your confidence level and margin of error.
These two variables work together to define how precise your results will be and how confident you can be in them. Here’s what each one means in practice:
📈Confidence level shows how sure you can be that your survey results reflect the true opinions of your target population. For example, in a brand tracking survey with a 95% confidence level, you can be 95% certain your measured brand awareness is close to the true value in the population.
📈Margin of error (MOE) tells you how far off those results could be, shown as a plus–minus range. For example, if 60% of respondents prefer Concept A with a ±3% margin of error, the true preference could be between 57% and 63%.
Together, confidence level and margin of error set the precision of your survey.
Aiming for a higher confidence level or a smaller margin of error means you’ll need a larger sample size.
Confidence levels are expressed as percentages, but when you calculate sample size, you’ll use the z-score. This is the number of standard deviations from the mean result that’s needed to achieve that confidence level. The most common values are:

90% confidence level → z-score of 1.645
95% confidence level → z-score of 1.96
99% confidence level → z-score of 2.576

As you can see, if you want a 95% confidence level, you’ll use a z-score of 1.96.
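If you’d rather derive the z-score than look it up, this quick sketch shows the standard two-tailed calculation (it assumes you have SciPy installed; the values match the table above):

```python
from scipy.stats import norm

for confidence in (0.90, 0.95, 0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-tailed critical value
    print(f"{confidence:.0%} confidence -> z = {z:.3f}")
```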
Your chosen z-score is what you’ll plug into the sample size formula. To make it easier, here’s roughly how the most common choice, a 95% confidence level (z = 1.96), translates into minimum sample sizes at different margins of error, assuming a large population and a standard deviation of 0.5 (we’ll discuss this in more detail below): about 385 respondents for ±5%, 601 for ±4% and 1,068 for ±3%.
These figures give you a ballpark before you run the full calculation and show how much precision costs in terms of sample size. For example, for a nationally representative survey in the US or UK, you’d need roughly 1,068 respondents to get a 3% margin of error with 95% confidence. If you only have about 600 respondents, your margin of error rises to 4%, which is generally acceptable in consumer research.
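You can also run the calculation in reverse: given the number of responses you can realistically collect, estimate the margin of error you’ll end up with. Here’s a minimal Python sketch under the same assumptions (95% confidence, maximum variability, large population); the margin_of_error helper is our own illustration, not a standard library or Attest function:

```python
import math

Z, P = 1.96, 0.5                                  # 95% confidence, maximum variability

def margin_of_error(n: int) -> float:
    """Approximate margin of error for a proportion, assuming a large population."""
    return Z * math.sqrt(P * (1 - P) / n)

for n in (385, 600, 1_068):
    print(f"n={n:>5}: margin of error ≈ ±{margin_of_error(n):.1%}")
```

Running it confirms the benchmarks above: roughly ±5.0% at 385 respondents, ±4.0% at 600 and ±3.0% at 1,068.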
“These are the minimum sample sizes I use when estimating how many respondents you need to hit a certain margin of error, assuming a 95% confidence level and maximum variability (which is the most conservative assumption):
The more precision you want, the more people you’ll need. These quick benchmarks are useful when you’re sizing up a survey and need to know how much data is enough to trust the results.”
— Brandon Talbot, Data Solutions Engineer @DataEQ
TL;DR: Confidence level shows how sure you can be that your results reflect the wider population; margin of error shows how far off they could be. The higher the confidence level or the smaller the margin of error, the bigger the sample you’ll need.
The next factor you need to think about when calculating sample size is how much variability to expect in your survey responses (i.e. how similar or different people’s answers are). Statisticians call this variability standard deviation (SD).
Let’s explain what we mean with an example. Ask people whether they celebrate Christmas and nearly everyone gives the same answer, so there’s very little variability in the responses. Ask the same people which of two ad designs they prefer and opinions could easily split down the middle, so variability is high.
So, what does this mean for your sample size calculation? The more variability (the higher the SD), the larger your sample needs to be.
In the Christmas example, the SD is close to 0, so you can reduce the sample size. In the ad design example, the SD is closer to 0.50, so you’ll need more respondents.
Before you calculate sample size, you’ll need to choose a value for SD. This is always an estimate, based on past data, a pilot survey or your best judgment. If you’re unsure, use 0.50. It’s the most conservative choice and ensures you don’t undersample.
Different levels of variability translate into very different required sample sizes, assuming a 95% confidence level, ±5% margin of error and a population over 20,000.
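Here’s a rough sketch of that relationship, using the unadjusted Cochran formula from the next step; the SD values shown are illustrative, and with a population over 20,000 the finite population correction barely changes these numbers:

```python
import math

Z, E = 1.96, 0.05                              # 95% confidence, ±5% margin of error

for sd in (0.1, 0.2, 0.3, 0.4, 0.5):
    n = (Z**2 * sd**2) / E**2                  # required sample for a given standard deviation
    print(f"SD = {sd:.1f}: about {math.ceil(n)} respondents")
```

The required sample grows from about 16 respondents at an SD of 0.1 to about 385 at the maximum SD of 0.5.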
TL;DR: The more varied your audience’s answers, the bigger your sample needs to be. Because you rarely know variability upfront, you have to estimate it. If in doubt, use 0.50 as it assumes maximum variability in your sample.
Once you’ve set your population size, margin of error, confidence level and estimated variability, you can calculate your sample size using Cochran’s formula.
It’s important to note that this formula assumes an infinite population, so you’ll usually need to adjust the result; we’ll show you how to do that below. The formula for the initial sample size is:

n₀ = (z² × p × (1 − p)) / e²

Where:
- z = the z-score for your chosen confidence level (1.96 for 95%)
- p = your estimated variability, expressed as a proportion (use 0.5 if unsure)
- e = your margin of error, expressed as a decimal (0.05 for ±5%)

Example: You’re surveying a customer base of 5,000 people and want a 95% confidence level (z = 1.96) with a ±5% margin of error (e = 0.05), assuming maximum variability (p = 0.5).

Step 1:
First, calculate the initial sample size:

n₀ = (1.96² × 0.5 × 0.5) / 0.05² = 0.9604 / 0.0025 ≈ 384.16

Step 2:
Because Cochran’s formula assumes an infinite population, you’ll need to adjust the calculation for a finite population to avoid oversampling:

n = n₀ / (1 + (n₀ − 1) / N) = 384.16 / (1 + 383.16 / 5,000) ≈ 357

✅ Result: You need 357 respondents for this survey to achieve a 95% confidence level with a ±5% margin of error.
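If you’d like to run this calculation yourself, here’s a minimal Python sketch of Cochran’s formula with the finite population correction. The function and variable names are our own, the example inputs mirror the worked example above, and the result is rounded up so you never undersample:

```python
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, p: float = 0.5) -> int:
    """Cochran's formula plus the finite population correction."""
    n0 = (confidence_z**2 * p * (1 - p)) / margin_of_error**2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                        # adjust for a finite population
    return math.ceil(n)                                         # round up to avoid undersampling

# Worked example: 5,000 customers, 95% confidence, ±5% margin of error
print(sample_size(5_000))   # -> 357
```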
Skip the math and get your sample size in seconds
No formulas. No spreadsheets. Just plug in your audience details and our free sample size calculator will tell you exactly how many responses you need for confident, credible results.
Prefer the shortcut? The five steps above show exactly how to calculate sample size, with a quick explanation of why each one matters and examples to guide you.
Even if you have calculated the “perfect” sample size, real-world factors almost always shape the final number.
Time, budget, audience reach and your chosen research method all put boundaries on what is possible. Being aware of these constraints helps you set realistic expectations and avoid overpromising what your survey can deliver.
Recruiting and collecting responses can take time, especially if you’re not using a panel. A large consumer study might need weeks to hit thousands of completes, but a campaign test may only allow a few days.
Short timelines often mean trade-offs: you may have to accept a higher margin of error or limit how many subgroups you report on.
For example, you might drop from reporting five age bands to only “under 35 vs 35+” because you will not have enough responses in each subgroup.
Workarounds include staggered sampling (running smaller waves and combining results later) or using pre-recruited research panels (like Attest!) that deliver faster turnaround.
Sample size is not just about statistics. It is also about cost. Most survey providers charge per completed response, so increasing your sample costs more.
If budgets are tight, focus on the most important groups and metrics rather than trying to cover everything.
You can also optimize by using representative panels instead of buying niche lists, or by relaxing your margin of error slightly (for example ±5% instead of ±3%), which lowers the number of responses required (at a 95% confidence level, from roughly 1,068 to about 385) without hurting quality too much.
Some groups are easy to reach while others are not. A nationally representative sample of 1,000 consumers is straightforward, but finding 300 CTOs in mid-sized fintechs is not.
When your audience is small or niche, the “ideal” sample size may not be feasible. In these cases, smaller but well-targeted samples can still provide useful guidance, as long as you’re upfront about their limitations.
The way you collect data changes how many responses you actually need. Some research methods are about getting solid numbers you can trust, while others are about digging deeper into opinions and themes.
Quantitative surveys: rely on larger samples, because the goal is statistically reliable numbers you can generalize to your wider audience.
Qualitative research: works with far smaller groups, because the goal is depth, context and themes rather than statistical precision.
The key is to match your sample size to your method. Use more respondents when you need precise numbers and fewer when you want stories and context.
Every big business decision relies on research you can trust. Getting sample size right makes that possible.
A sample that’s too small lets random chance skew your results. A sample that’s too large wastes time and budget for only marginal gains in accuracy.
The “right” number depends on your confidence level, margin of error, variability and real-world constraints like budget and timelines.
The good news is that you don’t have to guess. By following a simple 5-step process, you can calculate the right sample size for any project and gain confidence in the insights you collect.
Ready to get started? Use Attest’s sample size calculator to find your exact number, or explore our guide to representative samples to make sure your audience truly reflects the people you want to reach.
Sample size directly affects how much you can trust your survey results. If your sample is too small, results may be skewed by outliers or random chance, making your insights unreliable. If it’s too large, you’re likely spending more time and money than necessary for only small gains in accuracy. The right sample size strikes the balance: large enough to give you confidence in the results, but efficient enough to fit your budget and timeline.
Confidence level tells you how sure you can be that your survey results reflect the wider population. Margin of error shows how much those results could differ from reality. Increasing your confidence level or shrinking your margin of error both require larger sample sizes. For instance, moving from ±5% to ±3% margin of error nearly triples the number of respondents needed. Most research balances practicality by using a 95% confidence level with a margin of error between ±3% and ±5%.
You can calculate the ideal sample size in five steps:
1) Define your total population (the group you want results to represent).
2) Choose your margin of error (how precise you want the results to be).
3) Choose your confidence level and find the matching z-score.
4) Estimate variability (how diverse you expect responses to be).
5) Run the numbers using Cochran’s formula and adjust for finite populations if needed.
This process ensures your sample is grounded in both statistics and practical considerations. If you don’t want to run the math yourself, you can use Attest’s free sample size calculator to get an instant answer by plugging in your population, margin of error and confidence level.
Nikos joined Attest in 2019, with a strong background in psychology and market research. As part of the Customer Research Team, Nikos focuses on helping brands uncover insights to achieve their objectives and open new opportunities for growth.