Thematic Clustering as a Research Goal
One of the richest sources of customer experience data is the open-text survey response. Historically, it has also been one of the most difficult formats from which to extract meaningful insights, especially at scale.
When user researchers run surveys with open-text questions, a common goal is to group the huge number of responses into a small set of bite-sized, actionable takeaways. The identified themes are shared with product stakeholders and play a critical role in determining how to improve the product experience. For example, a handful of responses might be summarized by a single theme:
- Response: “I’m lost, can’t really find anything easily in the product”
- Response: “It’d be nice if there was a way to find users by name”
- Response: “Please add a search field”
- Theme: “Add search functionality”
Performed manually, this analysis takes the form of placing the responses into a large spreadsheet, reading through them to locate patterns, defining themes that represent actionable groups of responses, and then assigning each response to one or more of these themes (a process known as “coding”).
As you can imagine, this is a painstaking process that can't easily scale beyond a few hundred responses. Automating it is a powerful way to increase the leverage of researchers and bring the survey life cycle from weeks down to hours. The ability to do this accurately is also one of the key differentiators between Sprig and other customer survey tools.
Existing Attempts at Automation
Many methods of grouping customer experience data exist in the industry, but they tend to lack the nuance required by product researchers and stakeholders to make informed decisions. Examples of existing methods include, in order of increasing complexity:
- Word and phrase counts and string matching (see the sketch after this list)
- Topic extraction and modeling
- Keyword extraction and post-processing
- Similarity matching of keywords (e.g., using a thesaurus or a neural network)
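To make the simplest of these concrete, here is a minimal word-counting sketch in Python, applied to the search-related responses from earlier. The stop-word list and regex tokenizer are placeholder choices for illustration, not a production pipeline:

```python
# Minimal word-frequency baseline (the simplest method in the list above).
# The stop-word list and tokenizer are illustrative placeholders.
from collections import Counter
import re

responses = [
    "I'm lost, can't really find anything easily in the product",
    "It'd be nice if there was a way to find users by name",
    "Please add a search field",
]

STOP_WORDS = {"i'm", "can't", "it'd", "a", "the", "in", "if", "by", "to", "was", "be", "there"}

def top_terms(texts, n=5):
    """Count word occurrences across all responses, ignoring stop words."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP_WORDS]
    return Counter(words).most_common(n)

print(top_terms(responses))
# "find" appears twice, but nothing links "find" to "search",
# so the shared "add search functionality" theme is never surfaced.
```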
These natural language processing (NLP) methods are useful for a surface-level analysis, but they all have shortcomings. Take the following three sample responses received from users of a fictional web service:
- “The subscription fee is a bit steep”
- “The cost to change vendors might be too much for us at the moment”
- “You should include a pricing plan based on usage”
The topic shared by all three responses is cost. However, none of the cost-related words are the same, nor are the surrounding phrases. This rules out word-frequency counts, string matching, and keyword extraction, even if stemming or lemmatization is applied. Moreover, the intent of each response is different:
- Response 1 is complaining about the subscription costs
- Response 2 is referring to the switching costs of adopting a new tool, which includes time, effort, and other resources in addition to just monetary value
- Response 3 is requesting a different subscription cost tier
Even if more sophisticated techniques such as topic modeling or neural network-based similarity scoring are used, grouping these three responses together would not make sense from a product perspective, and could adversely affect the decision-making process resulting from such an analysis.
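To see why, here is a hedged sketch of neural similarity scoring using the open-source sentence-transformers library. The model choice is an illustrative assumption, and this is not Sprig's production system:

```python
# Sketch of neural similarity scoring with the open-source
# sentence-transformers library (model choice is illustrative).
from sentence_transformers import SentenceTransformer, util

responses = [
    "The subscription fee is a bit steep",
    "The cost to change vendors might be too much for us at the moment",
    "You should include a pricing plan based on usage",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Pairwise cosine similarity between the response embeddings.
scores = util.cos_sim(embeddings, embeddings)
for i in range(len(responses)):
    for j in range(i + 1, len(responses)):
        print(f"responses {i} and {j}: {float(scores[i][j]):.2f}")

# All three pairs tend to score as similar because they share the "cost"
# topic, so a threshold-based clustering step would merge a complaint, a
# switching-cost concern, and a feature request into a single group.
```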
How We Do It at Sprig
We employ a state-of-the-art approach to capture the nuance of themes seen in open-text survey responses. Beyond just counting words or identifying a topic, we also consider the context of the survey. This context is critical for capturing the user's intent in discussing the topic.
The examples in the previous section show how our approach disambiguates responses with similar topics but different meanings. But what about situations where it's impossible to determine the user's intent from the response alone?
Take the example response: "User search." If the question asked was "What do you like about our product?" then the respondent is praising this functionality. However, that same response to the question "What could we improve?" implies that this functionality is lacking. Considering the question helps reduce the ambiguity that is common in survey responses.
We do this using deep neural networks, which capture the complexities of natural language. Our models, trained on millions of survey responses, draw context from both the full response and the question asked to build a complete picture of the respondent's meaning.
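Sprig's models are proprietary, but the general pattern of encoding the question together with the response can be sketched with the Hugging Face transformers library. Everything below, the base model, the untrained classification head, and the placeholder theme labels, is an illustrative assumption rather than Sprig's implementation:

```python
# Sketch of context-aware classification: encode the survey question and the
# response together as a text pair so the model can attend to both. The base
# model and the theme labels are hypothetical placeholders; in practice the
# classification head would be fine-tuned on labeled survey data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

THEMES = ["praise: search", "request: search", "other"]  # hypothetical labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(THEMES)
)

def classify(question: str, response: str) -> str:
    # Encoding (question, response) as a pair lets the same response,
    # e.g. "User search", resolve to different themes per question.
    inputs = tokenizer(question, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return THEMES[int(logits.argmax())]

print(classify("What do you like about our product?", "User search"))
print(classify("What could we improve?", "User search"))
```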
The End Result
All themes should have a topic and an intent to provide a clear and actionable takeaway. It's also important to identify an element of emotional response (sentiment) and a recommendation based on the urgency of the theme's responses. Using advanced machine learning techniques, Sprig generates this AI analysis for you quickly, accurately, and at scale.
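Concretely, you can picture each generated theme as a small structured record. The sketch below is hypothetical; the field names and values are illustrative, not Sprig's actual schema:

```python
# Hypothetical shape of a generated theme; fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Theme:
    topic: str          # what the responses are about
    intent: str         # what the respondents want
    sentiment: str      # emotional tone: positive / negative / neutral
    recommendation: str # suggested action, weighted by urgency
    responses: list[str] = field(default_factory=list)  # supporting evidence

theme = Theme(
    topic="search",
    intent="feature request",
    sentiment="negative",
    recommendation="Prioritize adding search functionality",
    responses=[
        "Please add a search field",
        "It'd be nice if there was a way to find users by name",
    ],
)
```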