How much qualitative research is enough? How many interviews? How many surveys?
The exact numbers will vary, but suffice it to say that most product teams don’t conduct enough qualitative research. Product managers have busy schedules, and traditional qualitative research is long and laborious. So, if you’re like most PMs, you’ve adopted a “better than nothing” mentality. You’re happy with an occasional interview and a once-in-a-while survey because, hey, at least it’s better than nothing.
These days, PMs shouldn’t have to settle for crumbs, especially for something as valuable as rich user insights. The good news? The right qualitative research tools enable you to collect and analyze open-ended data on a large scale. And it can happen in a matter of hours — not weeks or months.
Scale matters, even for qualitative research
Talking to a few users is indeed better than nothing. But in qualitative research — just like quantitative — increasing your sample size helps you account for differing user opinions, spark new ideas, and, ultimately, build and ship the right thing.
In the PM community, the number five floats around as the magic number for qualitative research. The genesis of that number is a Nielsen Norman Group article called “Why You Only Need to Test with 5 Users,” which argues that testing with just five participants surfaces about 85% of a product’s usability problems. That holds because usability testing is about catching bugs and imperfections. When you run a usability test, you’re not creating new insights; you’re trying to spot issues, which are objective and finite.
Somewhere down the line, that idea got misapplied to qualitative research at large (e.g., interviews, surveys, and other methods of hearing from your users). But let’s be clear: you will learn more from talking to six users than from five, plain and simple. Unlike usability testing, these methods seek to understand your users’ day-to-day experience, something that can’t be pinned down as easily as a product bug. User needs are complex and subjective, and they evolve alongside your product and the market. Simply put, you can never get to the bottom of the bag. If you have time, another interview will probably teach you something new.
Say your sample size is five users. A sixth user might offer an entirely new insight: a pain point, a frustrating experience, or an original suggestion that ends up on your product road map. The smaller the sample size, the easier it is to accidentally spend time and resources building for an inaccurate user persona; if you interview just a couple of users, you might not get the full picture of your actual end user. The more people you talk to, the easier it is to identify patterns, spark new ideas, and weed out bad ones.
So, talking to more users is a good thing. But the question remains: How can someone with a busy schedule do it?
In-product surveys collect qualitative data at scale
For a qualitative research tool to be scalable, it has to help you collect more responses without adding human effort. That’s where in-product surveys come in.
If you’re unfamiliar with them, in-product surveys are short questionnaires that pop up at key points in your product experience. They let you conduct real-time research with existing users as they experience your product.
Before, product teams had two main methods of collecting open-ended insights: one-on-one interviews and traditional surveys.
One-on-one interviews are a gold mine: Nothing can compare to the depth of insight these long conversations can provide. However, that depth comes at the cost of scale. Let’s say that interviewing one user takes an hour. Interviewing three users takes three hours; five, five; ten, ten. And any product manager with ten spare hours is an anomaly.
The easy alternative, you think, is surveys. By sacrificing a little bit of depth, surveys allow you to hear from every single one of your users, so long as they respond. But that last part — so long as they respond — is the kicker. The response rate of a traditional email survey floats, on average, between 10% and 15%. It’s common for that number to dip below 2%, too.
The reason for that dismal rate? In a crowded inbox, a 20-minute survey is a big ask. When your users are clearing out their inboxes on a Friday afternoon, filling out product surveys usually isn’t a high priority. You can remind them that completing the survey is ultimately in their best interest because it helps you improve a product they use. But at the end of the day, they still see a 20-minute task. And, more often than not, they also see the trash icon and remember how nice inbox zero feels. If enough of your users click that trash icon, your “scalable” method is no longer all that scalable.
In-product surveys change the game by dramatically lowering the barrier to respond. They catch users while they’re already engaged in your product rather than competing for attention in a crowded inbox, which means users don’t have to task-switch, or even open a new window, to share a response. (And because they’re inside your product at the moment of the survey, their feedback tends to be more specific and contextual.) The typical response rate for an in-product survey launched with Sprig is between 20% and 30%, with some reaching 90%. With that steadier stream of insights, you’re well on your way to qualitative research at scale.
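To make that concrete, here’s a minimal sketch of the kind of triggering rule that decides when an in-product survey appears. The event name, user fields, and thresholds are all hypothetical; real tools like Sprig handle this inside their own SDKs, so this only shows the shape of the logic.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    sessions: int
    surveyed_recently: bool

def should_show_survey(user: User, event: str) -> bool:
    """Decide whether to prompt this user with a short in-product survey."""
    # Hypothetical rule: only trigger at a key moment in the product
    # experience, e.g., right after the user completes onboarding.
    if event != "onboarding_completed":
        return False
    # Skip brand-new users who haven't formed an opinion yet.
    if user.sessions < 3:
        return False
    # Avoid survey fatigue: respect a cooldown between prompts.
    return not user.surveyed_recently

# This user just finished onboarding and qualifies, so the survey appears.
print(should_show_survey(User("u_42", sessions=5, surveyed_recently=False),
                         "onboarding_completed"))  # True
```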
Automated text analysis makes sense of qualitative data at scale
Once you collect a large volume of qualitative data, you need to be able to make sense of it. Until then, all you have is a scattered, disorganized mountain of thoughts. Automated text analysis helps you do just that — make sense of qualitative data at scale.
Before, that analysis was manual. The most common process is called thematic analysis; it requires tape, a pen, and a whole lot of time. You start by combing through open-ended responses and coding them with short, descriptive labels. Then, you dig through the coded responses in search of patterns and group similar responses together into themes. (That’s where the tape comes in — until recently, the fastest method was to physically cut up responses and tape them onto a wall in groups.) Once the data is organized into themes, it’s up to you to apply those themes in your product decision-making.
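To picture what that coding-and-grouping step produces, here’s a minimal sketch using a handful of made-up survey responses; the labels stand in for the short codes you’d normally scribble by hand.

```python
from collections import defaultdict

# Hypothetical open-ended responses, each hand-coded with a short,
# descriptive label (the "code" in thematic analysis).
coded_responses = [
    ("I can never find the export button", "navigation"),
    ("Exporting my data takes forever", "performance"),
    ("The dashboard is confusing to navigate", "navigation"),
    ("Loading the reports page is really slow", "performance"),
]

# Group coded responses into themes: the digital equivalent of taping
# similar slips of paper together on a wall.
themes = defaultdict(list)
for response, code in coded_responses:
    themes[code].append(response)

for code, grouped in themes.items():
    print(f"{code}: {len(grouped)} responses")
```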
Depending on the amount of data, this process can take weeks. But automating the process cuts that work down to a matter of minutes. Essentially, artificial intelligence combs through mountains of open-ended text at lightning speed. It recognizes patterns (even without directly overlapping words and phrases) and generates themes, making the data more organized and digestible.
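For a rough sense of how the automated version works, here’s a toy sketch that clusters open-ended responses into themes using TF-IDF and k-means (via scikit-learn). Production tools lean on language-model embeddings instead, which can group responses even when the wording doesn’t overlap, but the principle is the same.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "I can never find the export button",
    "The dashboard is confusing to navigate",
    "Exporting my data takes forever",
    "Loading the reports page is really slow",
]

# Turn free text into weighted term vectors, ignoring common filler words.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(responses)

# Cluster similar responses together; each cluster becomes a theme.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(matrix)

# Label each theme with its most characteristic terms.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in center.argsort()[::-1][:3]]
    members = [r for r, c in zip(responses, kmeans.labels_) if c == i]
    print(f"Theme {i} ({', '.join(top_terms)}): {len(members)} responses")
```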
Most automated text analysis software presents those themes in something like a word cloud: A visual, digestible representation of your data. This saves you time but doesn’t always help you understand what to do next. That’s why Sprig’s automated text analysis feature offers themes in the form of specific, actionable suggestions.
Use a continuous qualitative research tool to gain deep insight at scale
Product managers don’t just need to talk to a lot of users — they need to do so quickly.
In the old days, smaller and slower user research made sense; products simply didn’t change as rapidly as they do now. But today, agile product teams live in a constant cycle of design, evaluation, and iteration. With traditional user research, your team’s priorities will have shifted a dozen times by the time you’ve held the interviews, collected the surveys, and analyzed the results.
That’s the inspiration behind Continuous Research: a method that leverages features like in-product surveys and automated text analysis to generate an ongoing stream of actionable insights. Product teams hear from users in real time, and those users are always responding to the latest version of the product.