A Practical Guide to Avoiding Survey Bias

Guides | Written by Allison Dickin | November 23, 2020

When speaking with people about surveys, I get more questions about preventing survey bias than anything else. While there are a lot of articles out there that name the different types of bias, few of them give practical advice on how to prevent bias from influencing survey results, so that’s what this article is intended to do.

Survey bias is the tendency for survey participants to respond inaccurately, or not entirely truthfully, to questions, often (but not always) unintentionally. There are lots of ways bias can creep into surveys, some of which can be controlled by following best practices and some of which cannot. You can control the degree of bias in your survey by making considered choices about who you survey, how you survey them, and how you design your survey questions. Since this article is part of a series about writing surveys (see the first article here), I’ll focus on the types of bias that can be controlled through well-designed survey questions.

Rule #1: Be Switzerland

One of the most straightforward ways of preventing survey bias is to keep your question wording neutral. Be Switzerland.

Often, in an attempt to inject excitement into surveys, we accidentally nudge participants in a particular direction. While it is important for survey questions to feel human and friendly, this should not come at the expense of high-quality results. Try to keep question wording neutral and provide only the information participants need to answer.

Here are some examples of questions that could create bias, and how to fix them:

  • “We love hearing about great experiences. How would you rate your experience?” In this question, the first sentence makes it a little too clear what the questioner is hoping to hear. This approach may bias people towards giving a more positive rating or discourage people with negative experiences from answering at all. It would be better to leave the first sentence out altogether, even if you lose some of the brand glow from the survey.
  • “How would you rate your experience on our new and improved website?” Here, the question injects bias by stating that the website is improved, a presumption that is likely to lead participants to respond more positively than they would otherwise. A better approach would be to leave out the words ‘and improved’ from the question. Even better, do a pre/post-test: before releasing the new website, ask visitors to rate their experience with the current (old) website. Then, after you launch the new site, ask the exact same question to visitors on the new site and compare the results (a quick sketch of that comparison follows below).
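To make the pre/post-test concrete, here is a minimal sketch in Python of the comparison step. The function name, the sample data, and the 1–5 rating scale are assumptions for illustration, not part of any particular survey tool:

```python
from statistics import mean

def compare_ratings(before, after):
    """Compare average experience ratings from two samples collected
    with the exact same question wording (old site vs. new site)."""
    before_avg, after_avg = mean(before), mean(after)
    return {
        "before_avg": round(before_avg, 2),
        "after_avg": round(after_avg, 2),
        "change": round(after_avg - before_avg, 2),
    }

# Hypothetical 1-5 ratings gathered before and after the redesign.
old_site_ratings = [4, 3, 5, 4, 2, 4, 3]
new_site_ratings = [5, 4, 4, 5, 3, 4, 4]
print(compare_ratings(old_site_ratings, new_site_ratings))
# {'before_avg': 3.57, 'after_avg': 4.14, 'change': 0.57}
```

The key point is that the question itself never changes; only the site underneath it does.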

Rule #2: Mind Your Answer Scales

If the first way to prevent bias in surveys is by keeping your questions neutral, the second way is by keeping your answer scales neutral (or, at least, balanced). You want to make sure that your scales give as much weight to the positive side as the negative side, so that your participants will have an option available to them, regardless of their experience.

Take, for example, a question asking users to rate their experience with an advertisement, with answer choices ranging from ‘Absolutely outstanding’ down to a single negative option. Setting aside the strangeness of the question (has *anyone* ever had an ‘absolutely outstanding’ experience with an advertisement?), the answer scale is extremely unbalanced, with 4 answer choices framed in the positive and only 1 framed negatively. If you’re looking to put your thumb on the scale for your report back to leadership, this might be a way to do it (haha, but really: don’t). But if you want to accurately gauge the user experience, it’s a terrible way to do it.

In this case, a more traditional scale would look one of two ways:

  1. A bipolar scale would have two positive responses, two negative responses, and one neutral (e.g., Excellent, Good, Okay, Bad, Terrible)
  2. A unipolar scale would use the same word or phrase to describe the experience across the scale, but cover the full range of feelings with descriptive adverbs (e.g., Extremely good, Very good, Somewhat good, Not very good, Not at all good)

To take a more typical example, if you were creating a scale for users to respond to a question about how satisfied they are with your product or service, your scale options would look like this:

  1. Bipolar: Very satisfied, Somewhat satisfied, Neutral, Somewhat dissatisfied, Very dissatisfied
  2. Unipolar: Extremely satisfied, Very satisfied, Somewhat satisfied, Not very satisfied, Not at all satisfied

Both bipolar and unipolar scales are acceptable, so it’s really up to you to choose which version to use. The one key is to use the same type of scale consistently throughout your survey (as much as possible).
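If you define answer options programmatically, a quick balance check can catch lopsided scales before a survey goes out. The sketch below is illustrative only; the negative-word markers and the helper function are assumptions, not part of any survey tool’s API:

```python
# Example scales from the article.
BIPOLAR_SATISFACTION = ["Very satisfied", "Somewhat satisfied", "Neutral",
                        "Somewhat dissatisfied", "Very dissatisfied"]
UNIPOLAR_SATISFACTION = ["Extremely satisfied", "Very satisfied", "Somewhat satisfied",
                         "Not very satisfied", "Not at all satisfied"]

def bipolar_is_balanced(scale, negative_markers=("dissatisfied", "bad", "terrible")):
    """Rough check that a bipolar scale weights both ends equally.

    Counts options containing a negative marker against the remaining
    non-neutral options; a balanced scale has the same number of each.
    """
    negatives = sum(1 for opt in scale
                    if any(m in opt.lower() for m in negative_markers))
    neutral = sum(1 for opt in scale if opt.lower() in {"neutral", "okay"})
    positives = len(scale) - negatives - neutral
    return positives == negatives

print(bipolar_is_balanced(BIPOLAR_SATISFACTION))  # True: 2 positive, 1 neutral, 2 negative
print(bipolar_is_balanced(
    ["Absolutely outstanding", "Great", "Good", "Fine", "Bad"]))  # False: 4 vs 1
```

The same idea extends to longer scales; what matters is that the positive and negative ends mirror each other.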

Rule #3: Keep Your Cards Close to Your Vest

Another way we can unintentionally bias survey questions is by being a little *too* straightforward about what we want to learn. It’s important to provide users with enough context to answer your questions, but beyond that, the less they know about your intentions, the better. This is true for any survey question, but it’s especially relevant in two cases:

(a) for screener questions (questions you ask at the beginning of a survey to determine what questions to show later on, or whether someone qualifies for the full survey), and

(b) for surveys where you are offering participants an incentive for responding (which could encourage them to try to game the system to make sure they qualify).

A simple example of how bias can creep in by providing information is a basic awareness question. Maybe you want to know whether users are aware of a particular feature your product offers, so you ask them a simple yes/no question: “Are you aware that Sprig automatically analyzes open-text responses for you?” On the surface, there’s nothing obviously wrong with this question, but if this is the first question in your survey, you’ve probably biased the responses.

Why? Since you’ve called out this one specific feature, you’re likely to end up with a sample that over-represents people who are either familiar with the feature or interested in it. Others are more likely to assume the survey isn’t relevant to them and move on without answering. The consequence is that you’ll probably end up with inflated awareness numbers, and a more positive picture of user sentiment about this feature than a more representative sample would provide (i.e., you’re not just biasing the results to this one question, but likely skewing the overall participant pool and biasing the rest of your results as well).

The solution to this problem is called ‘blinding,’ and simply requires you to ask about participants’ awareness of several features without giving away which one(s) you are really interested in until later in the survey. In this case, you might ask, “Which of these Sprig features are you aware of? Select all that apply.” (Automated open text analysis, In-product surveys, Survey template gallery, Event-based survey targeting, None of these). By asking the question this way, you’ll probably get lower (and likely more realistic) awareness numbers, and your responses to other questions will be more in line with your overall user base than if you had moved ahead without blinding your question.
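If you assemble this question in code, blinding largely comes down to listing several features, shuffling them, and only singling out the one you care about at analysis time. Here is a minimal sketch reusing the hypothetical feature list from the example above (none of this is a real Sprig API):

```python
import random

FEATURES = [
    "Automated open text analysis",   # the feature we actually care about
    "In-product surveys",
    "Survey template gallery",
    "Event-based survey targeting",
]
TARGET = "Automated open text analysis"

def build_awareness_question():
    """Blinded multi-select: the target feature is buried among its peers."""
    options = random.sample(FEATURES, k=len(FEATURES))  # shuffle to avoid order effects
    return {
        "prompt": "Which of these Sprig features are you aware of? Select all that apply.",
        "options": options + ["None of these"],
    }

def awareness_rate(responses):
    """Share of participants who selected the target feature.
    `responses` is a list of lists of selected options, one per participant."""
    aware = sum(1 for selected in responses if TARGET in selected)
    return aware / len(responses) if responses else 0.0

print(build_awareness_question())
print(awareness_rate([["In-product surveys"], [TARGET, "In-product surveys"], []]))  # ~0.33
```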

Rule #4: Be Mindful of Question Order

The order in which you ask your questions can affect the responses to your survey, either by giving participants information they otherwise wouldn’t have had or by prompting them to shift their mindset.

For example, let’s say you want to know how people rate your product and what can be improved about it. If you ask them what you can do to improve before you ask for their ratings, you are likely to get lower ratings, because you’ve just prompted users to think about all the things they don’t like about your product. The same would be true in the opposite scenario: if you ask customers what they like best about your product before asking them to rate it, you’re going to get better ratings.

To prevent bias from question order, use a funnel approach, in which you ask any general/overall questions first before digging into the details with more specific questions. It’s also helpful to do a full review once you’ve completed your survey and consider whether questions at the beginning could bias questions later on.

Sometimes you’ll find that whatever order you choose, some questions risk biasing others. In that situation, put your most critical questions first and move lower-priority questions later in your survey. Or, consider breaking it into two separate surveys with different participants.
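If you assemble surveys programmatically, one way to keep the funnel honest is to tag each question with how specific it is and sort on that before anything else, breaking ties by priority. The fields and questions below are made up for illustration:

```python
# 'scope' 0 = overall/general, higher = more specific; 'priority' breaks ties.
questions = [
    {"text": "What is one thing we could improve about the product?", "scope": 2, "priority": 2},
    {"text": "Overall, how satisfied are you with the product?",       "scope": 0, "priority": 1},
    {"text": "How easy was it to find the export feature?",            "scope": 3, "priority": 3},
]

# Funnel order: general questions first, then by priority.
for q in sorted(questions, key=lambda q: (q["scope"], q["priority"])):
    print(q["text"])
```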

Rule #5: Check Your Assumptions

It’s critical to keep in mind that when writing survey questions, you are operating with your own biases and assumptions that you may not be consciously aware of, and that these biases can easily bleed into the survey itself. For this reason, a key step when drafting surveys is trying to figure out whether you’ve injected your own bias into your survey, and if so, removing it.

How do you do this? It can be difficult to simply stop what you’re doing and identify your biases, but it gets a little easier once you’ve got something down on paper. Take a look at each question you’ve written and ask yourself whether it’s making any assumptions or taking anything for granted.

For example, maybe you have a few potential new product features in mind and you want to know which to build first. You write the question, “Which of the following features would be most valuable to you? (a ‘Favorites’ option to keep track of items you love, a ‘Save for Later’ option in your shopping cart, a ‘Share’ option that lets you share items with others)”

Notice any assumptions? Unless you have data from another source (which is entirely possible), you’ve assumed that at least some of these features would be valuable to users. But what if none of them are? You’ve asked them the question, so they’re going to give you an answer, and you could take action based on the results. But you may be missing something critical.

One possible solution would be to include a ‘None of these’ option, or an option for users to select ‘Other’ and write in what they would most like you to focus on. Another solution would be to ask users to rate each feature on a scale from “Not at all valuable” to “Extremely valuable” instead. This would give you an overall ranking as well as a sense of the perceived value of each feature.
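If you go the rating route, each feature gets its own scale question and you compare average perceived value afterwards. A minimal sketch, with made-up response data and a simple 1–5 mapping of the labels (both are assumptions for illustration):

```python
from statistics import mean

VALUE_SCALE = {
    "Not at all valuable": 1,
    "Not very valuable": 2,
    "Somewhat valuable": 3,
    "Very valuable": 4,
    "Extremely valuable": 5,
}

# Hypothetical responses: each participant rated every candidate feature.
responses = [
    {"Favorites": "Somewhat valuable", "Save for Later": "Extremely valuable", "Share": "Not very valuable"},
    {"Favorites": "Very valuable",     "Save for Later": "Very valuable",      "Share": "Not at all valuable"},
    {"Favorites": "Somewhat valuable", "Save for Later": "Extremely valuable", "Share": "Somewhat valuable"},
]

def feature_value_scores(responses):
    """Average perceived value per feature, so low scores are visible too."""
    features = responses[0].keys()
    return {f: round(mean(VALUE_SCALE[r[f]] for r in responses), 2) for f in features}

print(feature_value_scores(responses))
# {'Favorites': 3.33, 'Save for Later': 4.67, 'Share': 2.0}
```

A feature whose average lands near the bottom of the scale is a signal that none of the candidates may be worth building, which the forced-choice version would have hidden.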

While this is probably the most nebulous step in preventing survey bias, it is possibly the most important one, so don’t leave it out. It will get easier the more you do it, I promise!

Summing it up

I hope this article gave you some practical tips for preventing bias when you’re writing your own surveys. I’d love to hear what you thought, what other questions you have, or what else you’d like me to write about.


Written by

Allison Dickin

Survey fanatic and customer experience advocate. Former Research Director at Yale School of Management and Senior Research Manager at Etsy. Bucknell and University of Chicago alum.
