How do you use AI in your end-to-end workflow? It’s still murky waters out there, and if you ask five different researchers this question, you might get five different answers. As enterprises prioritize efficiency and cross-functional teams adopt AI tools, the pressure mounts for researchers to adopt this technology thoughtfully without compromising the rigor or quality of their insights.
But how?
I got a glimpse at Learners’ annual Research Week, a free virtual and in-person UX research conference that is a summer highlight for many in the field. One of the most popular talks this year was Kaleb Loosback’s (Staff Researcher, Instacart) “The AI-Empowered Researcher: How to Dance with the Devil and Keep Your Soul.”
It’s a provocative metaphor. “Dancing with the devil” refers to engaging in risky or dangerous behavior, often with the potential for dire consequences. But rather than leaning on the oft-used metaphor that we should “partner” or “collaborate” with AI, Kaleb’s talk offers a concrete framework that anyone can use to mitigate the risk of flawed outcomes. I think his talk resonated so well with the crowd not only because he demonstrated the wide range of AI tools he used across the research cycle, but also because researchers at companies of any size or stage of research maturity can apply his prompting methodology.
The basic premise is: garbage in, garbage out. Poor inputs reliably produce poor outputs. Kaleb’s CRAFTe framework gives research and product teams a simple way to make their prompt instructions clear and effective. Here’s what each letter stands for, with a sketch of how the pieces come together after the list:
C - Context
Who are you? What’s the background of the study? What are the business goals and research objectives? Who are the most important stakeholders, and what previous research has been done? Kaleb specifically advises feeding the AI your research plan. Yes, you still need to come up with a research plan that clearly defines the importance of the problem, what questions you want answered, how you want to answer them, and what you hope to do with the results.
R - Role
Help focus the AI’s mindset and domain expertise by defining what role it should play. For example: “Act as a Principal Researcher with a PhD in HCI.” That’s who you want to partner with, right? :) Depending on the task, you might want to prompt it to take on the role of your company’s CEO or your division’s VP if you’d like to pressure-test executive responses.
A - Actions
Rather than make a sweeping, broad request like “Give me the top three insights from these interviews,” give numbered instructions to ensure the AI doesn’t skip important steps. For example: First, review the research plan. Second, review the participant transcript. Third, generate a session summary, and so on.
F - Format
You or your company might already have a preferred research report format. You can dictate this format to the AI to reduce post-processing time. It might include bullet lists, citation styles, and even tone (informative, action-oriented, and collaborative). You can even list the target audience (product managers, marketing, sales) to home in on strategic implications.
T - Template
When possible, supply an actual template—session-summary doc, transcript analysis table, etc.—to guide the AI’s structure. For example, you may want each takeaway to come with two key participant quotes as evidence. Kaleb’s pro-tip is to ask the AI to generate a template from an example of a finished product.
e - examples
Show what an “ideal” output looks like. Kaleb insists this is the biggest lever for quality: once the AI sees model outputs, expectations align.
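To make the framework concrete, here is a minimal sketch of how the six CRAFTe pieces might be assembled into a single prompt before being pasted into whichever AI tool you use. It is not taken from Kaleb’s talk; the study details, placeholder text, and variable names are purely illustrative.

```python
# Minimal sketch: assembling the six CRAFTe pieces into a single prompt string.
# All study details below are illustrative placeholders, not Kaleb's examples.

context = (
    "Background: we are studying why shoppers abandon checkout in a grocery "
    "delivery app. Business goal: reduce cart abandonment. Research objective: "
    "identify the top friction points at checkout. The full research plan follows.\n"
    "<research plan pasted here>"
)

role = "Act as a Principal Researcher with a PhD in HCI."

actions = (
    "1. Review the research plan.\n"
    "2. Review the participant transcript.\n"
    "3. Generate a session summary using the template below."
)

fmt = (
    "Audience: product managers. Use bullet lists, an informative and "
    "action-oriented tone, and cite the transcript for every claim."
)

template = (
    "Session summary\n"
    "- Takeaway 1 (with two supporting participant quotes)\n"
    "- Takeaway 2 (with two supporting participant quotes)"
)

examples = "Example of an ideal finished summary:\n<example summary pasted here>"

# Role first, then context, actions, format, template, and examples.
prompt = "\n\n".join([role, context, actions, fmt, template, examples])
print(prompt)  # paste the result into your sanctioned AI tool
```

Keeping each piece in its own variable makes it easy to swap the actions or template for a different study without rewriting the rest of the prompt.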
As you can tell, this prompting methodology is rigorous. It helps ensure, as Kaleb puts it, that the researcher leads the dance with the AI, rather than the AI steering the researcher across the floor. In a panel discussion at Learners on “UX Research in 2030,” Jane Justice Leibrock, the Head of UX Research at Anthropic, also made the point that it is crucial to keep control of the collaboration:
“I’ve noticed something. I think there can be a first-thought problem with AI where if it becomes your habit to automatically think of it to solve any problem you may have, I find that immediately just asking it the question… is not nearly as good as sitting and thinking myself first [and then] giving AI the context of what I care about and what my hunches are.”
If CRAFTe helps you dance with the devil, how do you save your soul? Kaleb highlights four main points.
- Always review and validate outputs. It’s your reputation on the line, not the AI’s.
- Be mindful of ownership, privacy, and ethical concerns. Are the inputs things you have the right to upload? Are you working in a secure and sanctioned environment so uploads will not train public models (see the Samsung debacle)? Protect participant data and PII (personally identifiable information).
- Actively review for stereotypes and generalizations. AI outputs can reflect the biases and injustices of our societies.
- Be transparent and disclose AI use. Kaleb offers a concise disclaimer as an example: “The analysis was conducted using a combination of AI-powered data extraction and human evaluation and interpretation, ensuring both efficiency and accuracy.”
As more research teams explore AI to speed up synthesis and content creation, many are realizing that better prompts — not just better tools — are the key to getting high-quality results. In fact, Kaleb used 11 (!) AI tools across his research process, from intake and scoping through analysis and presentation.
How does CRAFTe integrate with a tool like Sprig? Sprig also believes that AI works best as a research accelerator, not as a lead dancer. While Kaleb’s talk highlighted a qualitative case study, CRAFTe applies just as well to surveys; he wrote to me that “you can easily modify the prompt to assist with drafting screener surveys, coding open ends, or even assisting you with R code.”
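As one hypothetical illustration of that flexibility (not an example from Kaleb’s materials), the same structure could be repointed at coding open-ended survey responses by swapping the role, actions, and template; the survey question and draft codebook below are illustrative placeholders.

```python
# Hypothetical adaptation of the same CRAFTe structure for coding open ends.
# The survey question and draft codebook are illustrative only.

role = "Act as a quantitative UX researcher experienced in thematic coding."

context = (
    "Study: post-checkout satisfaction survey. Open-ended question: "
    "'What, if anything, made checkout frustrating?'"
)

actions = (
    "1. Review the draft codebook below.\n"
    "2. Assign one or more codes to each response.\n"
    "3. Flag responses that fit no existing code and propose a new one."
)

fmt = "Return a table with columns: response_id, codes, supporting verbatim."

template = (
    "Draft codebook: payment errors; surprise fees; form too long; "
    "coupon confusion; other"
)

prompt = "\n\n".join([role, context, actions, fmt, template])
print(prompt)
```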
Surveys are often difficult to design and interpret correctly, and AI has the potential to help teams practice this demanding methodology thoughtfully. In the same “UX Research in 2030” panel at Learners, I was intrigued by Arianna McClain’s (Head of UX Research at OpenAI) comments about surveys in particular. Coming from an alum of IDEO, where qualitative research is king, her words carried even more weight:
“I always thought you did qualitative research and what you learned helps you build a survey, and then you use a survey to understand people at scale. And I think that with AI, it really is possible to talk to people at scale… A really well-written open-ended response [survey] can really take you far… I’m really excited for more people to hear what people have to say, instead of going in with an hypothesis with what you think people want.”
At Sprig, we are too.
Resources:
See Learners’ Conference Agenda: https://joinlearners.com/research-week/ai-and-uxr/
Watch Kaleb’s talk: https://www.youtube.com/watch?v=K02PVXI9maM&t=11941s
Read Kaleb’s example CRAFTe Prompts: https://www.heykaleb.com/musings/ai-empowered-researcher-framework?utm_source=sprig