
How to Calculate Survey Response Rate (With Examples and Formula)

survey methodology · response rate · data quality · survey metrics · best practices

Response rate calculation requires methodological choices: partial completes, eligibility, and contact failures. AAPOR definitions, formulas, and common errors.


Response rate = completed surveys ÷ eligible sample. The formula is simple. Defining "completed" and "eligible" is where it gets complicated.

Response rate is the most commonly reported survey metric and the most commonly miscalculated. Teams report "65% response rate" without specifying what counts as a response, how they handled partial completions, or whether they excluded ineligible contacts.

This matters because response rate affects how you interpret your data. A 20% response rate from a probability sample might be excellent. A 70% response rate that counts partial completions as "responses" might be misleading.

This guide covers the standard formulas, the AAPOR definitions that academic research uses, and the common mistakes that inflate or deflate reported rates.

TL;DR:

  • Basic formula: Response Rate = Complete Responses ÷ Eligible Sample
  • AAPOR standards define six response rate calculations with varying strictness.
  • Partial completions need a consistent rule: count them, exclude them, or weight them.
  • Ineligible contacts (wrong numbers, out-of-scope respondents) should be excluded from the denominator.
  • Unknown eligibility is the tricky case—AAPOR provides formulas for estimating this.
  • Response rate ≠ data quality. A high rate with biased non-response can be worse than a low rate with random non-response.

→ Track Response Rates with Lensym

The Basic Formula

The simplest response rate calculation:

Response Rate = (Number of Complete Responses ÷ Number of Survey Invitations) × 100

Example:

  • You send 1,000 survey invitations
  • 250 people complete the survey
  • Response Rate = (250 ÷ 1,000) × 100 = 25%

This works for simple cases. But real surveys have complications:

  • What about partial completions?
  • What if some invitations bounced or went to wrong numbers?
  • What if some recipients weren't eligible for the survey?
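Before handling those complications, the basic formula is a one-liner. A minimal sketch (the function name is ours):

```python
def response_rate(completes: int, invited: int) -> float:
    """Basic response rate: complete responses / invitations, as a percentage."""
    if invited <= 0:
        raise ValueError("invited must be positive")
    return completes / invited * 100

# Example from above: 250 completes out of 1,000 invitations
print(response_rate(250, 1000))  # → 25.0
```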

AAPOR Standard Definitions

The American Association for Public Opinion Research (AAPOR) defines six response rate formulas used in academic and professional research. The differences come down to how you handle partial completions and unknown eligibility.

Response Rate 1 (RR1) — Most Strict

Formula:

RR1 = Complete Interviews ÷ (Complete + Partial + Refusals + Non-contacts + Unknown Eligibility)

Only complete interviews count as responses. Everything else—including partial completions—goes in the denominator.

Use when: You need the most conservative estimate, or when partial data isn't useful for your analysis.

Response Rate 2 (RR2) — Includes Partials

Formula:

RR2 = (Complete + Partial) ÷ (Complete + Partial + Refusals + Non-contacts + Unknown Eligibility)

Partial completions count as responses.

Use when: Partial data is still valuable for your analysis.

Response Rate 3 (RR3) — Estimates Unknown Eligibility

Formula:

RR3 = Complete ÷ (Complete + Partial + Refusals + Non-contacts + (e × Unknown))

Where e = estimated proportion of unknown cases that are eligible (based on the eligibility rate among known cases).

Use when: You have significant unknown eligibility (bounced emails, disconnected phones) and want to estimate rather than assume.
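The first three AAPOR rates can be computed directly from disposition counts. A minimal sketch (the function name and the example counts are ours, not AAPOR's):

```python
def aapor_rates(complete, partial, refusal, noncontact, unknown, e=1.0):
    """Compute AAPOR RR1, RR2, and RR3 from disposition counts.

    e is the estimated proportion of unknown-eligibility cases that
    are actually eligible; it is used only by RR3.
    """
    denom_full = complete + partial + refusal + noncontact + unknown
    rr1 = complete / denom_full                 # strictest: completes only
    rr2 = (complete + partial) / denom_full     # partials count as responses
    denom_est = complete + partial + refusal + noncontact + e * unknown
    rr3 = complete / denom_est                  # unknowns discounted by e
    return rr1, rr2, rr3

# Illustrative dispositions: 200 complete, 50 partial, 300 refusals,
# 250 non-contacts, 200 unknown-eligibility, with e estimated at 0.9
rr1, rr2, rr3 = aapor_rates(200, 50, 300, 250, 200, e=0.9)
print(f"RR1={rr1:.1%}  RR2={rr2:.1%}  RR3={rr3:.1%}")
```

Note how RR3 sits between RR1 and an RR1 that simply dropped the unknowns: discounting unknowns by e shrinks the denominator without pretending they were all ineligible.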

Response Rates 4, 5, and 6

These follow the same pattern with different treatments of partials and unknown eligibility. RR4 adds partials to the RR3 numerator, keeping the same eligibility estimation. RR5 and RR6 drop unknown-eligibility cases from the denominator entirely (equivalent to assuming e = 0); RR6 also counts partials as responses.

Which Rate to Report?

For most business surveys, RR1 or RR2 is appropriate:

  • RR1 if you only analyze complete responses
  • RR2 if you include partial responses in analysis

For academic research or probability samples, RR3 or higher may be expected, with explicit eligibility estimation.

Always specify which calculation you used. "25% response rate" is meaningless without knowing the formula.

Handling Edge Cases

Partial Completions

A respondent who answers 15 of 20 questions: did they "respond"?

Options:

  1. Threshold approach: Define a completion threshold (e.g., 80% of questions answered). Above threshold = complete, below = partial.

  2. Key question approach: Define critical questions. If those are answered, count as complete regardless of total completion.

  3. Binary approach: Only 100% completion counts as complete. Everything else is partial.

Example:

  • 1,000 invitations sent
  • 200 completed 100% of questions
  • 75 completed 50-99% of questions
  • 25 completed less than 50%
  Approach               Numerator              Response Rate
  Binary (100% only)     200                    20%
  80% threshold          200 + ~50 = 250        25%
  Include all partials   200 + 75 + 25 = 300    30%

The "right" answer depends on your analysis needs. Just be consistent and transparent.
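The three approaches above can be compared on the same data. A sketch with illustrative per-respondent completion fractions (we assume, as the table does, that roughly 50 of the 75 partials clear the 80% threshold):

```python
# Fraction of questions answered by each respondent (illustrative data):
# 200 finished everything, 50 answered 85%, 25 answered 60%, 25 answered 30%
completion = [1.0] * 200 + [0.85] * 50 + [0.6] * 25 + [0.3] * 25
invited = 1000

binary = sum(1 for c in completion if c == 1.0)        # 100% only
threshold = sum(1 for c in completion if c >= 0.8)     # 80% threshold
all_partials = len(completion)                         # anyone who answered anything

for label, n in [("binary", binary),
                 ("80% threshold", threshold),
                 ("all partials", all_partials)]:
    print(f"{label}: {n / invited:.0%}")
```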

Ineligible Contacts

Some people in your sample shouldn't be counted because they were never eligible:

  • Email bounces: Invalid addresses
  • Wrong numbers: Phone surveys reaching wrong person
  • Screened out: Didn't meet eligibility criteria (e.g., not a customer)
  • Deceased/moved: No longer reachable

These should be removed from the denominator:

Adjusted Response Rate = Responses ÷ (Total Sample - Ineligible)

Example:

  • 1,000 invitations sent
  • 100 emails bounced (ineligible)
  • 50 people screened out as non-customers (ineligible)
  • 200 completed the survey

Unadjusted: 200 ÷ 1,000 = 20%
Adjusted: 200 ÷ (1,000 - 100 - 50) = 200 ÷ 850 = 23.5%

The adjusted rate is more accurate—it reflects the rate among people who could have responded.
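The adjustment is just a smaller denominator. A minimal sketch (the function name is ours):

```python
def adjusted_response_rate(completes: int, invited: int, ineligible: int) -> float:
    """Response rate after removing ineligible contacts from the denominator."""
    eligible = invited - ineligible
    if eligible <= 0:
        raise ValueError("no eligible contacts remain")
    return completes / eligible * 100

# Example from above: 1,000 invited, 100 bounces + 50 screen-outs, 200 completes
print(round(adjusted_response_rate(200, 1000, 150), 1))  # → 23.5
```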

Unknown Eligibility

The hardest case: you don't know if someone was eligible.

  • Non-responding email addresses: Were they valid?
  • Unanswered phone calls: Was the number correct?
  • Survey abandonment before screening: Were they eligible?

AAPOR approach: Estimate eligibility rate from known cases.

If 90% of contacts with known status were eligible, assume 90% of unknown-status contacts were also eligible.

Formula:

Estimated Eligible Unknown = Unknown × (Eligible Known ÷ Total Known)

Example:

  • 1,000 invitations
  • 700 responded or confirmed ineligible (known status)
  • 300 never responded, status unknown
  • Of the 700 known: 650 eligible, 50 ineligible
  • Eligibility rate: 650 ÷ 700 = 92.9%
  • Estimated eligible among unknown: 300 × 0.929 = 279

Adjusted denominator: 650 (known eligible) + 279 (estimated eligible) = 929
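The estimation above is three lines of arithmetic; a sketch reproducing the worked example:

```python
known_eligible, known_ineligible = 650, 50
unknown = 300

# Eligibility rate among cases with known status
e = known_eligible / (known_eligible + known_ineligible)      # ≈ 0.929

# Assume the same rate holds among unknown-status cases
estimated_eligible_unknown = unknown * e                      # ≈ 279

adjusted_denominator = known_eligible + estimated_eligible_unknown  # ≈ 929
print(round(adjusted_denominator))
```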

Common Calculation Mistakes

Mistake 1: Counting Clicks as Responses

"500 people clicked the survey link" ≠ 500 responses.

Response rate should count completions (or meaningful partial completions), not survey starts. A 50% completion rate among starters means your actual response rate is half what click-through suggests.

Mistake 2: Ignoring Bounces

If 15% of your emails bounced, keeping them in the denominator understates your response rate by 15% in relative terms: the reported rate is only 85% of the true rate among deliverable contacts.

Track deliverability separately and exclude undeliverable contacts from response rate calculations.

Mistake 3: Inconsistent Partial Rules

Counting partials as responses for one survey but not another makes rates incomparable.

Document your rules and apply them consistently across surveys.

Mistake 4: Confusing Response Rate with Completion Rate

  • Response rate: Responses ÷ Sample invited
  • Completion rate: Complete responses ÷ Responses started

A survey with 30% response rate and 80% completion rate means:

  • 30% of invited people started the survey
  • 80% of starters finished
  • True complete response rate: 30% × 80% = 24%
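The two metrics multiply together, which makes the distinction easy to check in code:

```python
response_rate = 0.30    # share of invitees who started the survey
completion_rate = 0.80  # share of starters who finished

# Complete response rate = invitees who both started AND finished
complete_response_rate = response_rate * completion_rate
print(f"{complete_response_rate:.0%}")  # → 24%
```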

Mistake 5: Not Reporting the Formula

"Our response rate was 45%" tells me nothing if I don't know:

  • What counted as a response
  • What was in the denominator
  • How you handled partials and unknowns

Always specify your calculation method.

Response Rate vs. Data Quality

High response rate doesn't guarantee good data. Low response rate doesn't guarantee bad data.

What matters is non-response bias: whether non-respondents differ systematically from respondents.

Scenario A: 20% response rate, but non-respondents are randomly distributed. Your data represents the population reasonably well.

Scenario B: 60% response rate, but the 40% who didn't respond are all dissatisfied customers avoiding your survey. Your data is biased toward satisfaction.

Scenario A produces better data despite the lower rate.

Response rate is a diagnostic, not a quality score. Low rates warrant investigation into non-response bias. High rates don't guarantee its absence.

For more on this, see our guide on survey response rate benchmarks.

Quick Reference

The Formulas

  Metric                   Formula
  Basic Response Rate      Complete ÷ Invited
  Adjusted Response Rate   Complete ÷ (Invited − Ineligible)
  AAPOR RR1                Complete ÷ (Complete + Partial + Refusal + Non-contact + Unknown)
  AAPOR RR2                (Complete + Partial) ÷ (Complete + Partial + Refusal + Non-contact + Unknown)
  Completion Rate          Complete ÷ Started

What to Report

When reporting response rate, include:

  1. The formula used
  2. How "complete" was defined
  3. How partials were handled
  4. How ineligibles were identified and excluded
  5. The raw numbers (not just the percentage)

Good: "We achieved a 28% response rate (RR1: 280 complete responses from 1,000 eligible contacts, excluding 150 bounced emails and 50 screened-out non-customers)."

Bad: "Our response rate was 28%."


Need to track response rates accurately?

Lensym provides real-time response tracking with automatic bounce detection and completion status monitoring.

→ Get Early Access

