A Research Prompt Framework that Fixes Generic AI Responses
Thought Leadership

Written by Anat Mooreville | Jul 30, 2025

How do you use AI in your end-to-end workflow? It’s still murky waters out there, and if you ask five different researchers this question, you might get five different answers. As enterprises prioritize efficiency and cross-functional teams adopt AI tools, the pressure mounts for researchers to embrace this technology thoughtfully without compromising the rigor and quality of their insights.

But how? 

I got a glimpse at Learners’ annual Research Week, a free virtual and in-person UX research conference that is a summer highlight for many in the field. One of the most popular talks this year was “The AI-Empowered Researcher: How to Dance with the Devil and Keep Your Soul” by Kaleb Loosback (Staff Researcher, Instacart).

It’s a provocative metaphor. “Dancing with the devil” refers to engaging in risky or dangerous behavior, often with the potential for dire consequences. But rather than leaning on the oft-used metaphor that we should “partner” or “collaborate” with AI, Kaleb’s talk offers a concrete framework that anyone can use to mitigate the risk of flawed outcomes. I think his talk resonated so well with the crowd not only because he demonstrated the wide range of AI tools he used across the research cycle, but also because researchers at any size company or stage of research maturity can apply his prompting methodology.

The basic premise is garbage in, garbage out: poor inputs produce poor outputs. Kaleb’s CRAFTe framework gives research and product teams a simple way to make their prompt instructions clear and effective. Here’s what each letter stands for:

C - Context

Who are you? What’s the background of the study? What are the business goals and research objectives? Who are the most important stakeholders, and what previous research has been done? Kaleb specifically advises feeding the AI your research plan. Yes, you still need to come up with a research plan that clearly defines the importance of the problem, what questions you want answered, how you want to answer them, and what you hope to do with the results.

R - Role

Help focus the AI’s mindset and domain expertise by defining what role it should play. For example: “Act as a Principal Researcher with a PhD in HCI.” That’s who you want to partner with, right? :) Depending on the actions, you might want to prompt it to take on the role of CEO of your company or VP of your division if you’d like to pressure-test executive responses.

A - Actions

Rather than ask a sweeping, broad request like “Give me the top three insights from these interviews,” you need to give numbered instructions to ensure the AI doesn’t skip important steps. For example: First, review the research plan. Second, review the participant transcript. Third, generate a session summary, etc. 

F – Format

You or your company might already have a preferred research report template. You can dictate this template to the AI to reduce post-processing time. It might include bullet lists, citation styles, and even tone (informative, action-oriented, and collaborative). You can even list the target audience (product managers, marketing, sales) to home in on strategic implications.

T – Template

When possible, supply an actual template—session-summary doc, transcript analysis table, etc.—to guide the AI’s structure. For example, you may want each takeaway to come with two key participant quotes as evidence. Kaleb’s pro-tip is to ask the AI to generate a template from an example of a finished product.

e – examples

Show what an “ideal” output looks like. Kaleb insists this is the biggest lever for quality: once the AI sees model outputs, expectations align. 
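The six components above can be assembled mechanically. Here is a minimal sketch in Python; the function name `build_crafte_prompt` and all example values are hypothetical illustrations, not from Kaleb’s talk. It simply concatenates the CRAFTe pieces into a single prompt string you could paste into any AI tool.

```python
def build_crafte_prompt(context, role, actions, fmt, template, examples):
    """Assemble a prompt from the six CRAFTe components (hypothetical sketch)."""
    # A: number the action steps so the AI doesn't skip any
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(actions, 1))
    sections = [
        f"Context: {context}",               # C
        f"Role: {role}",                     # R
        "Actions:\n" + numbered,             # A
        f"Format: {fmt}",                    # F
        "Template:\n" + template,            # T
        "Examples of ideal output:\n" + examples,  # e
    ]
    return "\n\n".join(sections)

prompt = build_crafte_prompt(
    context="(paste your research plan: problem, questions, stakeholders)",
    role="Act as a Principal Researcher with a PhD in HCI.",
    actions=[
        "Review the research plan.",
        "Review the participant transcript.",
        "Generate a session summary.",
    ],
    fmt="Bulleted findings; informative, action-oriented tone; audience: product managers.",
    template="Takeaway:\n- Supporting participant quote 1:\n- Supporting participant quote 2:",
    examples="(paste one finished session summary you consider ideal)",
)
print(prompt)
```

Each takeaway in the template slot carries two evidence quotes, mirroring the example in the Template section above; swap in your own report structure as needed.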

As you can tell, this prompting methodology is rigorous. It helps ensure, as Kaleb puts it, the researcher leads the dance with the AI, rather than the AI steering the researcher across the floor. In a panel discussion at Learners on “UX Research in 2030,” Jane Justice Leibrock, the Head of UX Research at Anthropic, also made the point that it is crucial to keep control of the collaboration: 

“I’ve noticed something. I think there can be a first-thought problem with AI where if it becomes your habit to automatically think of it to solve any problem you may have, I find that immediately just asking it the question… is not nearly as good as sitting and thinking myself first [and then] giving AI the context of what I care about and what my hunches are.” 

If CRAFTe helps you dance with the devil, how do you save your soul? Kaleb offers four safeguards.

  1. Always review and validate outputs. It’s your reputation on the line, not the AI’s. 
  2. Be mindful of ownership, privacy, and ethical concerns. Are the inputs things you have the right to upload? Are you working in a secure and sanctioned environment so uploads will not train public models (see the Samsung debacle)? Protect participant data and PII (personally identifiable information).
  3. Actively review for stereotypes and generalizations. AI outputs can reflect the biases and injustices of our societies. 
  4. Be transparent and disclose AI use. Kaleb offers a concise disclaimer as an example: “The analysis was conducted using a combination of AI-powered data extraction and human evaluation and interpretation, ensuring both efficiency and accuracy.”

As more research teams explore AI to speed up synthesis and content creation, many are realizing that better prompts — not just better tools — are the key to getting high-quality results. In fact, Kaleb used 11 (!) AI tools across his research process, from intake and scoping through analysis and presentation. 

How does CRAFTe integrate with a tool like Sprig? Sprig also believes that AI works best as a research accelerator, not as a lead dancer. While Kaleb’s talk highlighted a qualitative case study, CRAFTe is equally applicable to surveys; he wrote me that “you can easily modify the prompt to assist with drafting screener surveys, coding open ends, or even assisting you with R code.”

As surveys are often difficult to design and interpret correctly, AI has the potential to ensure the methodology is thoughtfully practiced. I was intrigued by the comments of Arianna McClain (Head of UX Research at OpenAI) in the same “UX Research in 2030” panel about surveys in particular. As an IDEO alum, where qualitative research is king, her words carried even more weight:

“I always thought you did qualitative research and what you learned helps you build a survey, and then you use a survey to understand people at scale. And I think that with AI, it really is possible to talk to people at scale… A really well-written open-ended response [survey] can really take you far… I’m really excited for more people to hear what people have to say, instead of going in with a hypothesis of what you think people want.”

At Sprig, we are too.

Resources:

See Learners’ Conference Agenda: https://joinlearners.com/research-week/ai-and-uxr/ 

Watch Kaleb’s talk: https://www.youtube.com/watch?v=K02PVXI9maM&t=11941s 

Read Kaleb’s example CRAFTe Prompts: https://www.heykaleb.com/musings/ai-empowered-researcher-framework?utm_source=sprig 

Written by Anat Mooreville

She is a design strategist and researcher who combines expertise in service design, qualitative research, and workshop facilitation to align stakeholders on user needs and inform strategic bets. Her experience spans financial services, healthcare, and innovation consulting, and she enjoys the great hiking in the Bay Area.
