Est. reading time: 6 min

5 Survey Questions You Should Never Ask

Tags: survey design, question design, survey mistakes, best practices

These common survey questions look fine but systematically destroy your data. Here's what's wrong with each and what to ask instead.

Some questions look perfectly reasonable until you try to use the data.

Not all bad survey questions are obviously bad. The worst ones seem fine. They get approved by stakeholders. They don't confuse respondents. They collect answers that look like data.

Then you try to make a decision with that data and realize you have nothing.

These five questions appear constantly in surveys. They're all broken in specific, predictable ways. Stop using them.

1. "How satisfied are you with our product/service?"

What's wrong: This question is too vague to be actionable.

When someone says they're "satisfied," what does that mean? Satisfied with what, specifically? The product itself? The price? Customer support? The onboarding experience? One feature they love that compensates for three they hate?

You get a number. The number goes up or down. You have no idea why, and therefore no idea what to do about it.

We've watched teams celebrate satisfaction scores improving from 7.2 to 7.5 with zero understanding of what caused the change. Was it a product update? A pricing change? Seasonality? They had no way to know.

Ask instead:

  • "How satisfied are you with [specific aspect]?" (product quality, support response time, ease of use)
  • Use a driver analysis: multiple specific questions that predict overall satisfaction (a rough sketch follows below)
  • Follow satisfaction questions with "What's the primary reason for your rating?"

Vague questions produce vague data. If you can't specify what you're measuring, you're not measuring anything.
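
If "driver analysis" sounds abstract, the idea is simple: collect ratings for several specific aspects plus an overall score, then check which aspects actually move the overall number. Here's a minimal sketch in Python; the file and column names are hypothetical, and real driver analyses get more sophisticated than this:

```python
# Rough driver-analysis sketch: regress overall satisfaction on specific
# aspect ratings and compare standardized coefficients.
# Assumes a CSV of 1-10 ratings with hypothetical column names.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("survey_responses.csv")  # hypothetical file

drivers = ["product_quality", "support_response_time", "ease_of_use"]
X = (df[drivers] - df[drivers].mean()) / df[drivers].std()  # standardize
y = df["overall_satisfaction"]

model = LinearRegression().fit(X, y)

# Bigger coefficient = that aspect explains more of the overall score.
for name, coef in sorted(zip(drivers, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:.2f}")
```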

2. "Do you agree or disagree: [statement]?"

What's wrong: Agree/disagree questions trigger acquiescence bias. People tend to agree with statements regardless of content.

This isn't a small effect. Studies show 10-15% inflation in agreement rates compared to balanced question formats. That's enough to completely flip your conclusions.

It gets worse. Acquiescence bias is stronger among:

  • Less educated respondents
  • Respondents taking the survey in a non-native language
  • Fatigued respondents (later in the survey)
  • Respondents who want to be polite

So your data is biased, and the bias isn't evenly distributed across your sample.

Example of the problem:

These two statements are contradictory:

  • "This company values work-life balance" (74% agree)
  • "This company prioritizes productivity over employee wellbeing" (68% agree)

They can't both be true. Acquiescence is why both get high agreement anyway.

Ask instead:

Use forced-choice formats:

  • "Which better describes this company: values work-life balance OR prioritizes productivity over wellbeing?"

Or use behavioral questions:

  • "In the past month, how often have you worked more than 8 hours in a day?"

Forced choices eliminate the agree-with-everything pattern. Behavioral questions measure actions, not attitudes people perform for surveys.

3. "On a scale of 1-10, how likely are you to recommend us?"

What's wrong: This is NPS (Net Promoter Score), and the question itself is fine. The problem is treating the result as meaningful without context.

NPS has become a vanity metric. Teams track it obsessively, celebrate when it goes up, panic when it goes down, and rarely connect it to actual business outcomes.

The number means nothing without knowing:

  • Why people gave that score
  • Whether they actually recommend you (behavior ≠ stated intent)
  • What would change their likelihood to recommend

An NPS of 45 could be amazing or terrible depending on your industry, segment, and competitive context. Benchmarking NPS across different businesses is mostly meaningless.
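
For context, the score itself is just arithmetic: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6), reported on a -100 to +100 scale. The made-up example below shows why the number alone tells you so little: two very different customer bases can land on exactly the same score.

```python
# Standard NPS arithmetic, with made-up scores for illustration.
def nps(scores):
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

polarized = [10, 10, 10, 9, 9, 8, 8, 2, 3, 5]   # passionate fans plus angry detractors
lukewarm  = [9, 9, 7, 7, 7, 7, 7, 7, 8, 8]      # mostly indifferent passives

print(nps(polarized), nps(lukewarm))  # both print 20
```

Same score, completely different situations, and completely different actions you'd want to take.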

The deeper problem: "Would you recommend?" is a hypothetical. People are bad at predicting their own future behavior. Someone who says "definitely would recommend" might never actually do it.

Ask instead:

If you're going to use NPS, always pair it with:

  • "What's the primary reason for your score?" (open-ended)
  • "Have you actually recommended us to anyone in the past 6 months?" (behavioral)

Better yet, measure actual referral behavior if you can. Stated intent ≠ action.

4. "How often do you [behavior]?"

What's wrong: This question relies on recall, and human memory is systematically biased.

People overestimate frequency of "good" behaviors (exercising, reading) and underestimate "bad" behaviors (snacking, screen time). They remember recent events more vividly than distant ones. They reconstruct memories to fit their self-image.

Asking "How often do you use our product?" will give you inflated numbers because:

  • Respondents who use it more are more likely to respond (self-selection bias)
  • People overestimate frequency of things they value
  • Recent usage is weighted too heavily in recall

Ask instead:

Anchor to specific, recent timeframes:

  • "In the past 7 days, how many times did you [behavior]?"
  • "When did you last [behavior]?" (yesterday, 2-3 days ago, past week, etc.)

Or use behavioral prompts:

  • "Think about yesterday specifically. Did you [behavior]?"

Shorter recall windows produce more accurate data. "Ever" and "usually" are too vague to measure reliably.

5. "Are you satisfied with your purchase AND would you buy again?"

What's wrong: This is a double-barreled question. It asks two things at once.

A respondent might be satisfied with their purchase but wouldn't buy again because their needs changed. Or they might be unsatisfied but would buy again because there's no alternative. They can only give one answer to two questions.

Double-barreled questions produce invalid data. You don't know which part of the question they're responding to.

This sounds obvious, but double-barreled questions sneak in constantly:

  • "Is our product easy to use and affordable?"
  • "How satisfied are you with our shipping speed and packaging?"
  • "Do you find our website helpful and easy to navigate?"

Ask instead:

One question, one concept:

  • "How satisfied are you with your purchase?"
  • "How likely are you to purchase from us again?"

Then analyze the relationship between the two. That's far more useful than a single muddled answer.
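
A quick cross-tab of the two answers shows you exactly where the interesting customers are: the satisfied ones who won't buy again, and the unsatisfied ones who will. A minimal sketch, assuming the responses sit in a CSV with hypothetical column names:

```python
# Cross-tabulate two single-focus questions instead of asking one muddled one.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Rows: satisfaction rating, columns: repurchase likelihood,
# normalized so each row sums to 100%.
table = pd.crosstab(df["purchase_satisfaction"],
                    df["repurchase_likelihood"],
                    normalize="index") * 100
print(table.round(1))
```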


The Pattern

These questions fail for the same reason: they look like they're measuring something, but they're not.

  • Vague satisfaction questions measure nothing specific
  • Agree/disagree questions measure agreeableness, not attitudes
  • Decontextualized NPS measures a number, not insight
  • Frequency questions measure memory bias, not behavior
  • Double-barreled questions measure... something, who knows what

Good survey questions are specific, behavioral, unbiased, and single-focus. If a question doesn't meet all four criteria, it's collecting noise that looks like signal.

That's worse than collecting nothing at all.


Want help designing questions that actually work?

Lensym's survey editor helps you build clear, unbiased surveys with built-in best practices.

→ Get Early Access


About the Author
The Lensym Team has reviewed hundreds of surveys and seen these mistakes in almost all of them. We're mildly obsessed with question design.