Analysis at scale: How Sprig leverages AI to supercharge user research

Written by Kevin Mandich | Jul 09, 2021

Thematic Clustering as a Research Goal

Open-text survey responses are one of the richest sources of customer experience data. Historically, they have also been one of the most difficult formats from which to extract meaningful insights, especially at scale.

When user researchers run surveys with open-text questions, a common goal is to group the huge number of responses into a small set of bite-sized, actionable takeaways. The identified themes are shared with product stakeholders and play a critical role in determining how to improve the product experience. For example, several responses might roll up into a single summarizing theme:

Response: “I’m lost, can’t really find anything easily in the product”

Response: “It’d be nice if there was a way to find users by name”

Response: “Please add a search field”

Theme: “Add search functionality”

Performed manually, this analysis takes the form of placing the responses into a large spreadsheet, reading through them to locate patterns, defining themes that represent actionable groups of responses, and then assigning all responses to one or more of these themes (a.k.a. “coding”).
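The end state of this manual coding process can be sketched as a simple mapping from responses to themes, using the search example above. The data structures here are illustrative only, not Sprig's internal model:

```python
from collections import Counter

# Manual "coding": each open-text response is tagged with one or more themes.
coded = {
    "I'm lost, can't really find anything easily in the product": ["Add search functionality"],
    "It'd be nice if there was a way to find users by name": ["Add search functionality"],
    "Please add a search field": ["Add search functionality"],
}

# Counting theme frequency tells stakeholders which takeaways matter most.
theme_counts = Counter(theme for themes in coded.values() for theme in themes)
print(theme_counts.most_common(1))  # [('Add search functionality', 3)]
```

The pain point is the middle step: a human still has to read every response to decide which themes exist and where each response belongs.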

As you can imagine, this is a painstaking process, and it can't easily scale beyond a few hundred responses. Automating it can dramatically increase researchers' leverage, cutting the survey life cycle from weeks down to hours. The ability to do this accurately is also one of the key differentiators between Sprig and other customer survey tools.

Existing Attempts at Automation

Many methods of grouping customer experience data exist in the industry, but they tend to lack the nuance required by product researchers and stakeholders to make informed decisions. Examples of existing methods include, in order of increasing complexity:

  1. Word and phrase counts and string matching
  2. Topic extraction and modeling
  3. Keyword extraction & post-processing
  4. Similarity matching of keywords (e.g. using a thesaurus, or maybe a neural network)

These natural language processing (NLP) methods are useful for a surface-level analysis, but they all have shortcomings. Take the following three sample responses received from users of a fictional web service:

“The subscription fee is a bit steep”

“The cost to change vendors might be too much for us at the moment”

“You should include a pricing plan based on usage”

The topic shared by all three responses is cost. However, none of the words associated with cost are the same, nor are the surrounding phrases. This rules out word frequency counts, string matching, and keyword extraction, even with stemming or lemmatization. Moreover, the intent of each response is different:

  • Response 1 complains about the subscription cost
  • Response 2 refers to the switching costs of adopting a new tool, which include time, effort, and other resources in addition to money
  • Response 3 requests a different subscription pricing tier

Even if more sophisticated techniques such as topic modeling or neural network-based similarity scoring are used, grouping these three responses together would not make sense from a product perspective, and could adversely affect the decision-making process resulting from such an analysis.
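To make the shortcoming concrete, here is a quick sketch showing that the three cost-related responses share no content words at all, so any word-overlap or keyword-matching approach has nothing to group on. The stopword list is an illustrative toy, not a production one:

```python
import re

responses = [
    "The subscription fee is a bit steep",
    "The cost to change vendors might be too much for us at the moment",
    "You should include a pricing plan based on usage",
]

# Toy stopword list, just enough to strip the filler from these examples.
STOPWORDS = {"the", "is", "a", "bit", "to", "might", "be", "too", "much",
             "for", "us", "at", "moment", "you", "should", "on", "based"}

def content_words(text: str) -> set[str]:
    """Lowercase, tokenize, and drop stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

word_sets = [content_words(r) for r in responses]
shared = word_sets[0] & word_sets[1] & word_sets[2]
print(shared)  # set() -- no content word appears in all three responses
```

All three responses are about cost, yet their content-word sets ({subscription, fee, steep}, {cost, change, vendors}, {include, pricing, plan, usage}) are completely disjoint, so surface-level matching never connects them.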

How We Do It at Sprig

We employ a state-of-the-art approach to capture the nuance of themes seen in open-text survey responses. Instead of just counting words or identifying a topic, we also consider the context of the survey. This method is critical for capturing the user's intent in discussing the topic.

The examples in the previous section show how our approach disambiguates responses with similar topics but different meanings. How about situations where it's impossible to determine the user's intent from the response alone?

Take the example response: "User search." If the question asked was "What do you like about our product?" then the respondent is praising this functionality. However, that same response to the question "What could we improve?" implies that this functionality is lacking. Considering the question helps reduce the ambiguity that is common in survey responses.
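A minimal rule-based sketch shows why pairing the question with the response disambiguates intent. This toy classifier is a stand-in for a trained model, and the labels are invented for illustration:

```python
def toy_intent(question: str, response: str) -> str:
    """Toy stand-in for a context-aware model: the identical response
    text receives a different label depending on the question asked.
    (A real model would also encode `response`; these crude rules
    only need the question to make the point.)"""
    q = question.lower()
    if "like" in q:
        return "praise"           # e.g. "What do you like about our product?"
    if "improve" in q:
        return "feature request"  # e.g. "What could we improve?"
    return "unknown"

# The same two words flip meaning entirely with the question context:
print(toy_intent("What do you like about our product?", "User search"))  # praise
print(toy_intent("What could we improve?", "User search"))               # feature request
```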

We do this by utilizing deep neural networks, which capture the complexities of natural language. Our models, trained on millions of survey response data points, capture context from the entire response and the question asked to provide a complete picture of the respondent's meaning.

The End Result

All themes should have a topic and an intent to provide a clear, actionable takeaway. It's also important to identify an element of emotional response (sentiment) and a recommendation based on the urgency of the theme's responses. Using advanced machine learning techniques, Sprig generates this AI analysis for you quickly, accurately, and at scale.
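Taken together, one such theme could be represented by a small record like the following. The field names and values are hypothetical, chosen to mirror the properties listed above rather than Sprig's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Theme:
    topic: str           # what the responses are about, e.g. "search"
    intent: str          # what users want, e.g. a feature request
    sentiment: str       # emotional tone of the grouped responses
    recommendation: str  # suggested action, informed by urgency

theme = Theme(
    topic="search",
    intent="feature request",
    sentiment="negative",
    recommendation="Add search functionality",
)
print(theme.recommendation)  # Add search functionality
```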


Written by

Kevin Mandich

Engineering PhD turned AI expert. Previously at Agari and Incubit. Has over six years of industry experience building state-of-the-art machine learning applications. UCSD alum.
