Some survey answers are true, widely agreed on, and still completely unhelpful. They create the illusion of clarity while leaving teams stuck. Here’s how we ran into that problem with an in-product survey and what changed once we stopped asking “why.”
The Power (and Cost) of In-Product Surveys
In-product surveys, sometimes called intercept surveys, are a powerful tool for collecting reliable, real-time customer feedback. Since these surveys “intercept” a user while they are trying to do something, the feedback is top of mind, easier to recall, and ultimately more relevant for the researcher. For example, if I’m building a cart on an e-commerce website and remove something from that cart, and a survey pops up asking why I deleted the item, I can easily recall my reasoning since it happened just seconds ago. Now imagine that same store emails me three weeks later asking why I removed a specific item from my cart. I can almost guarantee my recall won’t be as dependable. Because the in-product survey caught me in the moment, the feedback was easier to recall and more reliable.
At the same time, in-product surveys are expensive, or “high stakes.” They introduce friction into the user flow and can be actively frustrating if overused. Imagine visiting a website and being bombarded with survey pop-ups that interrupt every step of your flow. What should be a simple, uninterrupted experience turns into swatting at “survey flies” just to stay on track.
Designing for Low Lift, Not Perfect Answers
Because of this, in-product surveys need to be designed carefully. The goal is not perfect nuance, but a balance between collecting useful user feedback and making the survey easy for your users to digest. Here are a few best practices to help strike that balance.
Keep the survey short
In-product surveys work best when they’re brief. Instead of asking 20-plus questions, focus on two or three that let you get in and out quickly, so the customer can get back to what they came to do. Every additional question increases the chance someone abandons the survey entirely.
Save open-ends for last
Multiple-choice questions are the lowest lift for participants. They are quick to answer and require little effort beyond selecting from a set of options. Open-ended questions are incredibly valuable for understanding someone’s “why,” but they ask more of the person responding. If you include an open-ended question, make it the final one to minimize drop-off.
Write for scannability, not precision
It’s not uncommon for teams to try to pack as much as possible into in-product surveys precisely because they’re used so sparingly. But long prompts with dense text can quickly overwhelm users, especially when they are not expecting to take a survey. Questions should be readable, clear, and easy to digest at a glance.
The same principle applies to answer choices. Keep them concise, ideally to a single line, and limit the number of options to five or fewer. I’ve had PMs suggest survey questions with so many answer choices that they nearly fill an entire screen. The aim is the same as above: a survey people can digest at a glance, not one that captures every possible nuance.
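To make these practices a bit more concrete, here is a minimal sketch of how a short intercept survey might be defined as configuration. The schema, field names, and example questions are illustrative only and not tied to any particular survey tool: two questions total, multiple choice first with five or fewer single-line options, and the open-ended question saved for last.

```ts
// Hypothetical survey definition illustrating the practices above:
// short overall, multiple choice first, open-ended question last.
type SurveyQuestion =
  | { kind: "multiple_choice"; id: string; prompt: string; options: string[] }
  | { kind: "open_ended"; id: string; prompt: string };

const cartRemovalSurvey: SurveyQuestion[] = [
  {
    kind: "multiple_choice",
    id: "removal_reason",
    prompt: "Why did you remove this item from your cart?",
    options: [
      "Changed my mind",
      "Found a better price elsewhere",
      "Saving it for later",
      "Added it by mistake",
      "Other",
    ], // five or fewer options, each a single line
  },
  {
    kind: "open_ended",
    id: "anything_else",
    prompt: "Anything else you'd like to share?", // the open-end comes last
  },
];
```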
Case Study: A Question That Worked, Until It Didn’t
One of the most common reasons teams use in-product surveys is to understand what’s bringing someone to a product in the moment, so they can respond more effectively. A simple “What brings you to our page today?” is often a good place to start.
While I was working at a travel booking and guidance company, teams were constantly trying to answer that question. We surfaced a multiple-choice “What brings you here today?” survey on a handful of key pages, with answer choices grounded in past learnings. Because this was an in-product survey, those options needed to be short, scannable, and easy to process in the moment.
Over and over again, people told us they were coming to the site to “do research on a destination,” an answer that was technically true but still left us stuck. We knew that “researching a destination” meant very different things depending on where someone was in their travel planning, but capturing that nuance cleanly wasn’t feasible in an in-product survey.
When Motivation Stops Moving Decisions
At this point, we realized the problem wasn’t the survey itself, but the question we were asking. We’d gotten as much feedback as we could from asking why people were coming to the site, but we hadn’t asked where they were in their travel planning when they arrived. Instead of trying to use a motivation-based question to explain behavior, we shifted to classification questions that helped us understand where someone was in their planning journey.
We started by asking: “Are you planning a trip to this destination?” with three options plus an “Other” catch-all:
- Yes, I have a trip planned
- Unsure, I am considering it
- No, I am not planning on traveling here
- Other
For travelers who answered yes, we followed up with a couple of low-effort questions, like “Do you know your travel dates?” These questions weren’t designed to capture nuance or emotion; their purpose was much simpler: to clearly label where the traveler was in their planning journey.
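As a rough sketch of that flow (with illustrative names, not our actual implementation), the branching is a simple mapping from the first answer to an optional follow-up:

```ts
// Illustrative sketch of the classification flow: the first answer
// determines whether a low-effort follow-up question is shown.
type PlanningAnswer = "trip_planned" | "considering" | "not_planning" | "other";

const planningQuestion = {
  prompt: "Are you planning a trip to this destination?",
  options: {
    trip_planned: "Yes, I have a trip planned",
    considering: "Unsure, I am considering it",
    not_planning: "No, I am not planning on traveling here",
    other: "Other",
  },
};

// Only travelers who already have a trip planned see the follow-up.
function followUpFor(answer: PlanningAnswer): string | null {
  return answer === "trip_planned" ? "Do you know your travel dates?" : null;
}
```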
We already had a strong understanding of how travelers’ needs change over the course of planning a trip. Someone with set travel dates is often further along in their planning journey and doesn’t need to be “sold” on the destination. They’re more focused on planning their days, whether that means viewing sample itineraries or exploring tours to book ahead of time. Once we had this layer of classification in place, the same on-page behavior became much easier to interpret, and we could prioritize content based on where someone actually was in their planning.
Turning True Answers Into Decisions
The impact was immediate. Response rates were strong, likely because the questions were quick and easy to answer. More importantly, we finally had our bearings: roughly half of travelers told us they were actively planning a trip, while about a third were still undecided.
With that clarity in place, teams could make more confident tradeoffs, like prioritizing bookable activities for people actively planning a trip while surfacing comparison content and guidance for those still deciding.
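Here is a hedged sketch of what acting on that classification might look like in page logic. The segment names and content types are placeholders, but the idea is simply to branch on where the traveler said they are:

```ts
// Illustrative only: rank content modules based on the traveler's
// self-reported planning stage from the survey.
type PlanningStage = "actively_planning" | "undecided" | "not_planning";

function contentPriorities(stage: PlanningStage): string[] {
  switch (stage) {
    case "actively_planning":
      // Already committed: help them plan their days.
      return ["bookable_tours", "sample_itineraries", "practical_tips"];
    case "undecided":
      // Still deciding: help them compare and choose.
      return ["destination_comparisons", "best_time_to_visit", "highlights"];
    default:
      // No trip in mind: lightweight inspiration.
      return ["photo_galleries", "travel_inspiration"];
  }
}
```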
That’s the real takeaway. In-product surveys don’t always need to explain behavior to be useful. When motivation questions stall out, using surveys to classify where someone is can make the data you already have far more actionable.