The SEER Framework
For user-informed product development
Based on research with teams at leading technology companies, including Spotify, Notion, Dropbox, and Google
User feedback leads to transformative products — if your product team is working in sync. But often, product managers, designers, and researchers each have their own disparate approach, tools, and timelines for collecting and acting on user feedback.
Below, you’ll walk through the SEER steps, key activities, and tools that you can put into action today to make user-driven decisions across the product development lifecycle like the world’s best product teams.
How to Use the SEER Framework
We created SEER to document the proven processes from tech’s best product teams in a framework that leverages both customer insights and the agile methodology behind building successful products. This framework is designed to bridge the cross-functional divide and bring the entire product development team together to help you build better products systematically.
Conversations with dozens of senior leaders and a community-wide survey informed the SEER framework so it’s built on how the best of the best do user-informed product development.
To use SEER, bring together your product team and map your current process, project plans, and trackers to the framework. See where you need to add steps into your planning, and explore the methods and deliverables you could adopt. Assign roles and responsibilities between PMs, designers, and researchers, and time bound each stage in your project plan template.
Try this in your next team meeting or offsite — it’s designed to spark conversation, innovate processes, and bring your team together to build products your users will love.
Analyze continuous data streams on user behavior and sentiment to uncover problems and opportunities
Half of teams surveyed closely monitor systems and metrics that passively inform them or “sense” what’s going on with the product and company. This helps identify high-impact research opportunities for more targeted research.
For example, if you notice a big increase in the number of product referrals, that’s a sign you should investigate what’s causing the change. Or, if you see a big drop in the conversion rate of free trials to paying customers, it’s time to launch research into that drop.
Research and product teams can work together to set expected targets for key metrics ahead of launch so you can identify if anything is amiss that you should investigate. It’s helpful to have a dashboard with all your key metrics on a daily, weekly, and monthly basis, pulling in data from other passive systems like CSAT and NPS surveys and support tickets. These continuous data streams should be transparent across all teams, from research to product to design.
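As a minimal sketch of the idea above, the check below flags any metric that drifts outside an expected band around its pre-launch target. The metric names, target values, and tolerance are illustrative assumptions, not figures from the SEER research:

```python
# Hedged sketch: flag metrics that drift beyond an expected target band.
# Metric names, targets, and the 10% tolerance are illustrative assumptions.

def check_metrics(observed, targets, tolerance=0.10):
    """Return metrics whose observed value deviates from its target
    by more than `tolerance` (as a fraction of the target)."""
    alerts = {}
    for name, target in targets.items():
        value = observed.get(name)
        if value is None:
            continue  # metric not reported in this window
        deviation = (value - target) / target
        if abs(deviation) > tolerance:
            alerts[name] = round(deviation, 3)
    return alerts

# Example: trial conversion is 16% below target, so it gets flagged;
# NPS is within tolerance, so it does not.
targets = {"trial_conversion_rate": 0.25, "nps": 40}
observed = {"trial_conversion_rate": 0.21, "nps": 42}
print(check_metrics(observed, targets))  # → {'trial_conversion_rate': -0.16}
```

A check like this could run on the same daily or weekly cadence as the dashboard, turning passive monitoring into a prompt for targeted qualitative research.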
Other helpful sources include app store reviews (App Store and Google Play), review platform scores (G2 and Trustpilot), social media, and online forums (Reddit).
Sensing can be used to discover areas you want to explore, identify where building will have the highest impact, spot opportunities to optimize the experience of a product or feature, or recognize when there is a problem you need to react to right away.
If this is a well-defined problem, it’s something that’s alive, it’s an existing product, go look at your quant numbers because they'll help direct your qual research. You can look at your funnel, you can look at a certain change. You can say, ‘This conversion rate dropped by 10% this month, what’s up?’, and that’s a great opportunity for qual research to step in and answer those questions.
Develop context and nuance around the identified user need and problem
Once you’ve identified an area with a problem or big opportunity, you’re ready to conduct generative research to develop a clearer picture of what you’re trying to solve, or the optimization you’re looking to achieve, by adding more context from your users themselves.
Exploration projects can be initiated by Sensing changes in metrics or a request from leadership to assess opportunities around a specific business objective, such as expanding upmarket to serve enterprise-sized companies. To justify the time and resources to conduct an Exploration project, you need a concrete business reason and buy-in across product, research, design, and exec teams.
Exploratory research uncovers the stories and journeys, details and nuances that matter to the end user. Together, the research team provides insights and the product team uses them to imagine solutions for their customers’ problems.
Qualitative methods, including user interviews, are the backbone of exploratory research, and 95% of survey respondents indicated they rely on interviews during this stage.
Generative research is not just about pushing boundaries and looking to some hypothetical future; it is about defining those very boundaries. Speaking with customers about their deepest imagined needs provides a critical structure around how far your product can push.
Test varied solution concepts to get conviction on the right direction
Even with a well-defined problem or opportunity, that doesn’t mean there’s only one way to solve it. In the Evaluate stage, product, research, and design teams should work together to narrow down the many possible solution concepts to one optimal direction to move forward with conviction.
To evaluate the different approaches offered by cross-functional product and design partners, researchers we surveyed use concept tests (59%) and interviews (62%).
These research activities typically use designed prototypes, where designs developed in Figma, InVision, or other prototyping tools simulate a product experience. This approach is less expensive compared to coding new features or concepts, and it allows for fast iterations and changes based on user input.
This evaluation helps you focus your understanding of your customer and further drill down on your Exploration findings. Ultimately, this means research can quickly de-risk potentially expensive product investments, especially if using unmoderated testing for feedback. It gives conviction to the product and design teams as they move forward.
Working with a prototype gives you a number of advantages, the primary one being the cost of making changes. With design prototypes, you can be nimble in changing an experience, compared to engineered prototypes, where change can be costly.
Test developed prototypes to identify technical issues and user comprehension, as well as optimize the user experience
Once you have research-informed conviction in your team’s approach from the Evaluate stage, it’s time to move forward into working with engineered prototypes to reveal any technical issues, bugs, or problems with the user experience. In this stage, research’s work on usability testing helps validate product and design priorities as launch nears.
These engineered prototypes leverage real user data, allowing users to experience a product that is personalized to them, just as they would in everyday life. For example, in a tool like Zillow, users would be able to actually search, see results, and save properties in the prototype. Users aren’t just looking at the design — they’re seeing it in action and are able to provide meaningful feedback.
This is where usability testing takes center stage as you evaluate whether the copy is clear, the navigation makes sense, and the product is offering value to your users. You may even uncover hidden bugs.
If there’s anything to fix, now is the time. This is the final gut check before your product is ready to move to launch. The Refine stage gives you conviction that you’ve built a successful product, ready to be used by customers — or, if it’s an update, one that improves on what you had before.
In the Refine stage, usability testing with the right users (whether that’s your current users, churned users, or potential users) is critical to developing a deep understanding of the pain points and opportunities. This helps the product and design teams make clear changes that improve the user experience and result in greater user value and engagement.
Monitor stability, functionality, and quality to make sure the live product matches expectations
Of course, shipping your new product or feature isn’t the end of the learning journey, but it is the start of a new one.
Research while developing something new has limitations: there are only so many people you can talk to and only so many contexts you can investigate. At some point, you have to launch to a larger group of people and quickly fix the issues that inevitably pop up. Beta programs are the perfect tool for the job.
You can think of Beta programs as a staged rollout of a new release, where more and more people are given access as your confidence in stability and quality rises. During a beta, you will want to monitor product analytics closely, send out regular surveys, and do the occasional interview as well.
You may start your beta with only a few dozen people, but as engagement and retention improve you can and should onboard larger and larger cohorts of users.
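The staged-rollout pattern described above is often implemented by hashing each user ID into a stable bucket, then widening the percentage of buckets with access as confidence grows. This is a minimal sketch of that approach; the user ID format and rollout percentages are illustrative assumptions:

```python
import hashlib

def in_beta(user_id: str, rollout_pct: float) -> bool:
    """Deterministically decide whether a user is in the beta cohort.

    Hashing the ID means a user's assignment is stable across sessions,
    and raising rollout_pct only ever adds users — it never removes them.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 2**32  # map hash prefix to [0, 1)
    return bucket < rollout_pct

# Widen the cohort in stages as stability and quality hold up:
for pct in (0.01, 0.10, 0.50, 1.00):
    cohort = [u for u in (f"user-{i}" for i in range(1000)) if in_beta(u, pct)]
    print(f"{pct:.0%} rollout -> {len(cohort)} users")
```

Because assignment is a pure function of the user ID, each stage's cohort is a superset of the previous one, which keeps the beta experience consistent for early participants.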
More than half of those surveyed have run a beta in the last year. A mix of tools like Hotjar for heatmaps, Intercom for live chat, and Slack for comms can make it easier to manage learning from a large group of people.
Ultimately, a Beta is a risk management system disguised as a research program. By the time it’s finished, you’ll have confidence you can ship to your entire customer base without breaking your product or confusing large segments of your users.
When requesting feedback from beta users, we recruit users who have engaged with the feature at least once. Sometimes we will use more fine-grained targeting to reach a particular group of beta users, for example users who have disabled the feature or engaged in a unique way. This allows us to customize questions based on engagement and can lead to deeper insights!
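The fine-grained targeting described in the quote above amounts to filtering your event log for users who took a particular action before surveying them. A minimal sketch, where the event shape and action names are illustrative assumptions:

```python
# Hedged sketch: pick beta users to survey based on how they engaged.
# The event records and action names below are illustrative, not a real schema.

events = [
    {"user": "a", "action": "used_feature"},
    {"user": "a", "action": "used_feature"},
    {"user": "b", "action": "disabled_feature"},
    {"user": "c", "action": "opened_app"},
]

def users_matching(events, action):
    """Return the set of users who performed the given action at least once."""
    return {e["user"] for e in events if e["action"] == action}

engaged = users_matching(events, "used_feature")       # ask about the experience
disabled = users_matching(events, "disabled_feature")  # ask why they turned it off
print(sorted(engaged), sorted(disabled))  # → ['a'] ['b']
```

Segmenting respondents this way is what lets you tailor survey questions to actual behavior — users who disabled the feature get a different question set than engaged users.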
Join us for a live Q&A
February 8, 2023
11:00 AM PST / 2:00 PM EST
Hear from Sprig CEO Ryan Glasgow and Learners CEO Alec Levin about the findings behind the SEER framework.
The session will be recorded, so we encourage you to sign up even if you can’t attend.
This project was only possible because of the collaboration of these incredible individuals. They offered expert opinions, thoughtful support, and a window into their everyday work.
We thank them for making product development more user-informed and the process more transparent and accessible for all.
Brett Bejcek, Rewind.ai
Virginia Steindorf, Spotify
Sigal Vainapel, Autodesk
Shibani Shah, ServiceNow
Saakshi Yadav, ServiceNow
Kristin Walko, Gusto
Tyreek Houston, ConsenSys
Jennifer Fast, ServiceNow
Melissa Sanchez, Verizon
Zach McDonald, Dropbox
Julia Meriel, Scribd
Hannah Moyers, Amazon
Rachel Kim, Questrade
Carlos Hernandez, Nubank
Maral Elliot, ServiceNow
Carly Hatjes, Zoom
Amanda Gelb, Asana
Zoe Glas, Google
Bret Scofield, Notion
Alesha Arp, Brightcove
Launch a Sprig in minutes. See insights within hours.
Get conviction around every product decision. Start with Sprig to collect user feedback across the product lifecycle, fast.