
How to determine sample size in five steps

As a business, you face big decisions: which features to prioritize, how to position your brand, or where to invest your marketing budget.

The insights you gather will only be as strong as the responses behind them, which raises the question: how many people do you actually need to survey?

If your sample size is too small, your results risk being distorted by chance or outliers. Too large, and you spend more time and money than necessary. The right sample size is the balance point that makes your research both reliable and actionable.

In this article, we’ll show you five steps to work out sample size. You’ll see how factors like population, margin of error, confidence level and variability shape the number and how practical limits like budget and timelines affect what’s achievable. Along the way, we’ll share quick benchmarks and examples to help you avoid common mistakes.

TL;DR

Getting sample size right ensures your survey insights are reliable and actionable without wasting time or budget. The process comes down to five steps:

  1. Define your population: Who you want results to represent.
  2. Set your margin of error: The precision you need (±3–5% is typical).
  3. Pick your confidence level: Usually 95% (Z = 1.96).
  4. Estimate variability: The more diverse the answers, the bigger the sample required (use 0.5 if unsure).
  5. Run the calculation: Use Cochran’s formula and adjust for finite populations if needed. If you don’t want to do this manually, you can use Attest’s calculator to work out your sample size in seconds. 

Practical limits like budget, time, or how easy your audience is to reach often decide what’s possible. Even a smaller, focused sample can still guide decisions, as long as you recognize the limitations.

What is sample size?

Sample size is the number of people who take part in your research. In surveys, it’s the group of respondents whose answers you use to represent your target audience. 

This target audience could be as broad as a national customer base, as focused as a regional target market, or as specific as a niche B2B segment.

In other words, your sample is the smaller slice of people you study so you can draw conclusions about the bigger population.

Why does sample size matter?

Sample size is not just a number. It determines whether your survey insights are credible enough to guide real-world business decisions. 

Too small a sample and results could be swayed by random chance or those who buck the trend. Too large and you risk overspending time and budget for only marginal gains in accuracy.

The right sample size strengthens:

  • Reliability of results: Findings stay consistent if the research is repeated with a similar group.
  • Margin of error: A larger sample narrows the gap between your survey results and the true views of the wider population.
  • Confidence level: How sure you can be that your results reflect the wider population.

For example, if you survey only 20 customers about a new product feature and two people happen to have extreme views, your percentages shift dramatically. 

Scale up to a representative sample of 500 and those outliers carry far less weight, giving you a trustworthy picture of what your audience really thinks.

That trust makes research actionable and helps you make confident decisions. 

How to calculate your sample size (step-by-step)

Now that you know why sample size matters, let’s look at how to figure it out in practice. The process comes down to five steps, which we’ll discuss in more detail in the sections below: 

  • Define your population size.
  • Choose your margin of error (MOE).
  • Choose your confidence level and find the z-score.
  • Estimate variability.
  • Calculate sample size.

Step 1: Define your population size

Your population size is the total number of people your research is meant to represent. It’s important to define this number because it sets the starting point for calculating how many responses are enough. 

For example, in a customer satisfaction survey, that might be the number of active customers you serve. In a national brand awareness study, it could be the entire adult population of the country you want to target.

To determine population size, start with the data you already have. 

  • For customer or employee surveys, check your CRM, subscription records or HR database. 
  • For wider market studies, use external sources like census data, government statistics or industry reports. 

In addition, always tie the population to your research goal. For instance, if you only want feedback from VIP customers, or decision-makers in mid-size firms, narrow your population to those groups.

If you cannot find an exact number, make your best estimate. For very large populations, the exact size has little impact on the calculation: once a population goes over 20,000, increasing it further has almost no effect on the required sample size.

The key is to define the scope clearly so every other step builds on a solid foundation.
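You can see this plateau for yourself with a short Python sketch, assuming the common defaults covered later in this article (95% confidence, ±5% margin of error, maximum variability) and the finite population correction shown in step 5:

```python
import math

def adjusted_sample_size(n0: float, population: int) -> int:
    """Apply the finite population correction to an initial sample size n0."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Initial sample size for 95% confidence, +/-5% MOE, p = 0.5 (Cochran's formula)
n0 = (1.96 ** 2 * 0.5 * 0.5) / 0.05 ** 2  # = 384.16

for population in (500, 2_000, 5_000, 20_000, 100_000):
    # Required sample climbs quickly at first, then flattens out near 384
    print(f"{population:>7,} people -> {adjusted_sample_size(n0, population)} respondents")
```

The required sample grows from 218 respondents at a population of 500 to 357 at 5,000, but barely moves between 20,000 (377) and 100,000 (383), which is why a rough population estimate is usually good enough.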

Steps 2 and 3: Choose your confidence level and margin of error

Once you know your population size, the next step is to set your confidence level and margin of error. 

These two variables work together to define how precise your results will be and how confident you can be in them. Here’s what each one means in practice:

📈 Confidence level shows how sure you can be that your survey results reflect the true opinions of your target population. For example, in a brand tracking survey with a 95% confidence level, you can be 95% certain your measured brand awareness is close to the true value in the population.

📈 Margin of error (MOE) tells you how far off those results could be, shown as a plus–minus range. For example, if 60% of respondents prefer Concept A with a ±3% margin of error, the true preference could be between 57% and 63%.

Together, confidence level and margin of error set the precision of your survey. 

Aiming for a higher confidence level or a smaller margin of error means you’ll need a larger sample size. 

Confidence levels are expressed as percentages, but when you calculate sample size, you’ll use the Z-score. This is the number of standard deviations from the mean result that’s needed to achieve that confidence level.

| Confidence level | Z-score |
| --- | --- |
| 80% | 1.282 |
| 90% | 1.645 |
| 95% | 1.96 |
| 99% | 2.576 |

As the table above shows, if you want a 95% confidence level, you’ll use a z-score of 1.96.
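These z-scores come from the inverse of the normal distribution’s cumulative distribution function. If you ever need a confidence level that isn’t tabulated, Python’s standard library can derive it; a minimal sketch:

```python
from statistics import NormalDist

def z_score(confidence: float) -> float:
    """Two-tailed z-score for a confidence level, e.g. 0.95 -> 1.96."""
    return NormalDist().inv_cdf((1 + confidence) / 2)

for level in (0.80, 0.90, 0.95, 0.99):
    print(f"{level:.0%} -> {z_score(level):.3f}")  # matches the table above
```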

Your chosen z-score is what you’ll plug into the sample size formula. To make it easier, here’s how those z-scores translate into minimum sample sizes at different margins of error:

| Confidence level | ±5.0% | ±3.0% | ±2.5% | ±2.0% | ±1.0% |
| --- | --- | --- | --- | --- | --- |
| 80% | 165 | 457 | 657 | 1,027 | 4,106 |
| 90% | 271 | 752 | 1,083 | 1,691 | 6,764 |
| 95% | 385 | 1,068 | 1,537 | 2,401 | 9,604 |
| 99% | 664 | 1,844 | 2,654 | 4,147 | 16,588 |

Assumes a large population and a standard deviation of 0.5 (we’ll discuss this in more detail below).

These figures give you a ballpark before you run the full calculation and show how much precision costs in terms of sample size. For example, for a nationally representative survey in the US or UK, you’d need roughly 1,068 respondents to get a ±3% margin of error with 95% confidence. If you only have about 600 respondents, your margin of error rises to around ±4%, which is generally acceptable in consumer research.
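You can reproduce these benchmarks, and invert them to find the margin of error a given sample actually delivers, with a short Python sketch (z-scores hardcoded from the table above; for maximum variability, p = 0.5):

```python
import math

# z-scores for common confidence levels (from the table above)
Z = {0.80: 1.282, 0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(confidence: float, moe: float, p: float = 0.5) -> int:
    """Minimum sample for a large population (Cochran's formula, rounded up)."""
    z = Z[confidence]
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

def margin_of_error(confidence: float, n: int, p: float = 0.5) -> float:
    """Margin of error you actually get from a sample of n respondents."""
    z = Z[confidence]
    return z * math.sqrt(p * (1 - p) / n)

print(sample_size(0.95, 0.03))               # 1068 respondents for +/-3%
print(round(margin_of_error(0.95, 600), 3))  # ~0.04, i.e. roughly +/-4%
```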

“These are the minimum sample sizes I use when estimating how many respondents you need to hit a certain margin of error, assuming a 95% confidence level and maximum variability (which is the most conservative assumption):

  • ±5.0%: 385 respondents
  • ±3.0%: 1,068 respondents
  • ±2.5%: 1,537 respondents
  • ±2.0%: 2,401 respondents
  • ±1.66%: 3,485 respondents
  • ±1.0%: 9,604 respondents

The more precision you want, the more people you’ll need. These quick benchmarks are useful when you’re sizing up a survey and need to know how much data is enough to trust the results.”


Brandon Talbot, Data Solutions Engineer @DataEQ

TL;DR: Confidence level and margin of error

  • You need to choose both a confidence level and a margin of error before calculating sample size.
  • Most researchers use a 95% confidence level (Z-score = 1.96).
  • A typical margin of error is ±3% to ±5%. Smaller margins of error give you more precise results, but they also require bigger sample sizes.
  • Higher confidence levels also increase the number of responses you need.

Step 4: Estimate variability in your audience (p)

The next factor you need to think about when calculating sample size is how much variability to expect in your survey responses (i.e. how similar or different people’s answers are). Statisticians call this variability standard deviation (SD).

Let’s explain what we mean with two examples:

  • Low variability (e.g. SD ≈ 0.10) means most people give the same answer. For example, say you ask “Which month is Christmas in?” nearly everyone says December. With responses so tightly clustered, even a small sample will give you reliable results.
  • High variability (e.g. SD ≈ 0.5) means answers are all over the place. For example, say you test five different ad designs across multiple age groups and see strong preferences in different directions. With opinions so divided, you’ll need a much larger sample to be confident in the outcome.

So, what does this mean for your sample size calculation? The more variability (the higher the SD), the larger your sample needs to be. 

In the Christmas example, the SD is close to 0, so you can reduce the sample size. In the ad design example, the SD is closer to 0.50, so you’ll need more respondents.

Before you calculate sample size, you’ll need to choose a value for SD. This is always an estimate, based on past data, a pilot survey or your best judgment. If you’re unsure, use 0.50. It’s the most conservative choice and ensures you don’t undersample.

Here’s how different levels of variability translate into required sample sizes (assuming a 95% confidence level, ±5% margin of error, and a population over 20,000):

| Standard deviation (p) | Variability description | Approx. required sample size |
| --- | --- | --- |
| 0.50 | Maximum variability (opinions evenly split) | 384 respondents |
| 0.40 | High variability | 246 respondents |
| 0.30 | Moderate variability | 138 respondents |
| 0.20 | Low variability | 62 respondents |
| 0.10 | Very low variability | 25 respondents |

TL;DR: The more varied your audience’s answers, the bigger your sample needs to be. Because you rarely know variability upfront, you have to estimate it. If in doubt, use 0.50 as it assumes maximum variability in your sample. 
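Why is 0.5 the safe default? In Cochran’s formula (covered in step 5), the variability term p × (1 − p) peaks at p = 0.5, so 0.5 always produces the largest required sample. A quick Python check, assuming 95% confidence and a ±5% margin of error (note these figures use the p(1 − p) form, so they won’t line up exactly with the rounding in the table above):

```python
import math

def sample_size(p: float, z: float = 1.96, e: float = 0.05) -> int:
    """Required sample for proportion p at 95% confidence, +/-5% MOE."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# p * (1 - p) peaks at p = 0.5, so 0.5 always demands the largest sample --
# which is exactly why it's the conservative default when you're unsure.
for p in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"p = {p:.1f} -> {sample_size(p)} respondents")
```

Whatever the true variability turns out to be, a sample sized for p = 0.5 will never leave you undersampled.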

Step 5: Calculate sample size 

Once you’ve set your population size, margin of error, confidence level and estimated variability, you can calculate your sample size using Cochran’s formula. 

It’s important to note that this formula assumes an infinite population, so you’ll need to adjust it for smaller, known populations. We’ll show you how to do that below.

n₀ = (Z² × p × (1 − p)) / e²

Where:

  • n₀ = initial sample size (assuming an infinite population)
  • z = z-score based on confidence level (1.96 for 95%, 1.645 for 90%, 2.576 for 99%)
  • p = estimated proportion of variability (0.5 if unknown)
  • e = margin of error in decimal form (e.g., 0.03 for ±3%)

Example:

  • Population size (N): 5,000 customers
  • Confidence level: 95% (Z = 1.96)
  • Estimated variability: p = 0.5
  • Margin of error: ±5% ( e = 0.05)

Step 1: 

First, calculate the initial sample size: 

n₀ = (1.96² × 0.5 × 0.5) / 0.05² = 0.9604 / 0.0025 = 384.16 ≈ 384

Step 2: 

Because Cochran’s formula assumes an infinite population, you’ll need to adjust the calculation for a finite population to avoid oversampling: 

n = n₀ / (1 + (n₀ − 1) / N) = 384.16 / (1 + 383.16 / 5,000) ≈ 357

✅ Result: You need 357 respondents for this survey to achieve a 95% confidence level with ±5% margin of error.

Skip the math and get your sample size in seconds

No formulas. No spreadsheets. Just plug in your audience details and our free sample size calculator will tell you exactly how many responses you need for confident, credible results.

Calculate my sample size

TL;DR: How to calculate your sample size

Want the shortcut? These five steps show exactly how to calculate sample size, with a quick explanation of why each one matters and examples to guide you.

| Step | What to do | Why it matters | Example |
| --- | --- | --- | --- |
| 1. Define your population size (N) | Work out the total number of people you want your research to represent. | Determines whether you need to adjust for a finite population in your calculation. | For a customer satisfaction survey, this could be your 5,000 active customers. For a national brand awareness study, it might be the 53 million UK adults. |
| 2. Choose your margin of error (e) | Decide the range you’re comfortable with for potential error in your results (e.g., ±3% or ±5%). | Smaller MOE = more precision but a larger sample needed. Larger MOE = less precision but a smaller sample needed. | ±3% MOE: If 60% of respondents prefer one ad, the real number could be between 57% and 63%. |
| 3. Choose your confidence level and find the z-score (Z) | Pick how certain you want to be that your results reflect the true population, then match it to the correct z-score. | The z-score is essential for the sample size formula; it sets the width of your confidence range. | Confidence level to z-score mapping: 90% (1.645), 95% (1.960), 99% (2.576). For example, 95% confidence means you can be 95% certain results fall within your margin of error. |
| 4. Estimate variability (p) | Predict how much opinions will differ. | Higher variability = larger sample needed. | If unsure, use p = 0.5 for a safe (conservative) estimate: a 50/50 split in opinion requires the largest sample size. |
| 5. Calculate using a formula or calculator | Use Cochran’s formula for large populations, then adjust for finite populations if needed. | Gives you the exact sample size you need. | Formula: n₀ = (Z² × p × (1 − p)) / e². With population = 5,000, confidence = 95% (Z = 1.96), MOE = ±5% (e = 0.05) and p = 0.5, n₀ = 384, adjusted for population = 357 respondents. |

Real world constraints to consider 

Even if you have calculated the “perfect” sample size, real-world factors almost always shape the final number. 

Time, budget, audience reach and your chosen research method all put boundaries on what is possible. Being aware of these constraints helps you set realistic expectations and avoid overpromising what your survey can deliver.

Timeline

Recruiting and collecting responses can take time, especially if you’re not using a panel. A large consumer study might need weeks to hit thousands of completes, but a campaign test may only allow a few days. 

Short timelines often mean trade-offs: you may have to accept a higher margin of error or limit how many subgroups you report on. 

For example, you might drop from reporting five age bands to only “under 35 vs 35+” because you will not have enough responses in each subgroup.

Workarounds include staggered sampling (running smaller waves and combining results later) or using pre-recruited research panels (like Attest!) that deliver faster turnaround.

Budget

Sample size is not just about statistics; it is also about cost. Most survey providers charge per completed response, so increasing your sample costs more. 

If budgets are tight, focus on the most important groups and metrics rather than trying to cover everything. 

You can also optimize by using representative panels instead of buying niche lists or by relaxing your margin of error slightly (for example ±5% instead of ±3%), which lowers the number of responses required without hurting quality too much.

Target population

Some groups are easy to reach while others are not. A nationally representative sample of 1,000 consumers is straightforward, but finding 300 CTOs in mid-sized fintechs is not. 

When your audience is small or niche, the “ideal” sample size may not be feasible. In these cases, smaller but well-targeted samples can still provide useful guidance, as long as you’re upfront about their limitations.

Research method

The way you collect data changes how many responses you actually need. Some research methods are about getting solid numbers you can trust, while others are about digging deeper into opinions and themes.

Quantitative surveys: 

  • These use closed questions like multiple choice or rating scales. 
  • Larger samples deliver more reliable results and support subgroup comparisons. 
  • If you want to measure brand awareness or test creative concepts, you need to collect the full number of responses your sample size calculation called for. That ensures your numbers are accurate enough to guide decisions.

Qualitative research: 

  • This includes interviews, focus groups and open-ended questions. 
  • Smaller samples work because the goal is depth, not percentages. 
  • Talking to 20 well-chosen customers can reveal themes and motivations you cannot get from tick-box surveys.

The key is to match your sample size to your method. Use more respondents when you need precise numbers and fewer when you want stories and context.

The right sample size is your shortcut to better decisions 

Every big business decision relies on research you can trust. Getting sample size right makes that possible. 

A sample that’s too small lets random chance skew your results. A sample that’s too large wastes time and budget for only marginal gains in accuracy. 

The “right” number depends on your confidence level, margin of error, variability and real-world constraints like budget and timelines.

The good news is that you don’t have to guess. By following a simple 5-step process, you calculate the right sample size for any project and gain confidence in the insights you collect.

Ready to get started? Use Attest’s sample size calculator to find your exact number, or explore our guide to representative samples to make sure your audience truly reflects the people you want to reach.

FAQs

Why does sample size matter for survey results?

Sample size directly affects how much you can trust your survey results. If your sample is too small, results may be skewed by outliers or random chance, making your insights unreliable. If it’s too large, you’re likely spending more time and money than necessary for only small gains in accuracy. The right sample size strikes the balance: large enough to give you confidence in the results, but efficient enough to fit your budget and timeline.

How do confidence level and margin of error affect sample size?

Confidence level tells you how sure you can be that your survey results reflect the wider population. Margin of error shows how much those results could differ from reality. Increasing your confidence level or shrinking your margin of error both require larger sample sizes. For instance, moving from ±5% to ±3% margin of error nearly triples the number of respondents needed. Most research balances practicality by using a 95% confidence level with a margin of error between ±3% and ±5%.

How do you calculate the ideal sample size?

You can calculate the ideal sample size in five steps:

1) Define your total population (the group you want results to represent).

2) Choose your margin of error (how precise you want the results to be).

3) Choose your confidence level and find the matching z-score.

4) Estimate variability (how diverse you expect responses to be).

5) Run the numbers using Cochran’s formula and adjust for finite populations if needed.

This process ensures your sample is grounded in both statistics and practical considerations. If you don’t want to run the math yourself, you can use Attest’s free sample size calculator to get an instant answer by plugging in your population, margin of error and confidence level.

Nikos Nikolaidis

Senior Customer Research Manager 

Nikos joined Attest in 2019, with a strong background in psychology and market research. As part of the Customer Research Team, Nikos focuses on helping brands uncover insights to achieve their objectives and open new opportunities for growth.

See all articles by Nikos