Insights from our Boston Research Leadership Dinner
Last night, I hosted a dinner with research leaders from Glassdoor, SimpliSafe, MathWorks, SmartBear, Cengage, and WEX here in Boston. Nine researchers. One question: How do we stay relevant in the age of AI?
What I thought would be a theoretical conversation about the future of research turned into something much more grounded. Everyone around the table was dealing with versions of the same tension right now: How do we move faster with AI without losing the rigor that makes research valuable in the first place?
The patterns that emerged weren't predictions about what might happen. They were descriptions of what's already happening in research teams across the country — the real problems, the real trade-offs, and the pragmatic solutions people are actually building.
The Context: Why This Conversation Matters Now
For the past two decades, research teams have been built around a simple model: researchers do the research. They design studies, conduct sessions, analyze findings, write reports. It's labor-intensive, slow, and requires specialized skills.
Then AI agents arrived.
Suddenly, there are tools that can draft survey logic. Synthesize themes from transcripts. Pressure-test research questions. Generate reports. The question everyone's asking is obvious: What's left for the researcher?
The answer, according to the people around that table, isn't "nothing." It's actually more interesting than that.
Pattern 1: AI Can Scale Research. But Nobody Wants to Hand Over Judgment.
Teams are already using AI to synthesize responses, spot themes, and speed up analysis. But — and this is the critical part — nobody's blindly trusting the outputs.
The best teams are pressure-testing everything.
One person described AI as a "review layer," not a replacement. Another talked about using agents to pressure-test research plans before studies even go live — catching flawed logic or biased framing before a single participant is recruited.
Here's what stuck with me: The smart teams aren't automating thinking. They're automating the operational grind.
This is a real shift in how research leadership thinks about their job. It's the difference between "AI does the work" and "AI handles the work I didn't want to do anyway, so I can focus on what actually requires judgment."
One person put it this way: researchers are shifting from being in the loop to being on the loop. Not removed. Elevated. Monitoring, guiding, deciding — the parts that actually matter.
Pattern 2: Speed Matters. But Trust Matters More.
Almost every team in that room is hearing some version of the same directive from leadership: "Do more with less."
It's reasonable. Research takes time. AI can help. So the pressure is on to move faster, to scale, to democratize research across the organization.
But here's the tension: you can't just move fast. You have to move fast responsibly.
One leader talked about working closely with legal to figure out how AI can fit into a highly regulated environment (education). Not fighting the process. Building trust through it. Making sure compliance isn't a blocker; it's built in from the start.
She's not frustrated about it. She's building something more resilient.
The companies getting this right aren't moving fast and breaking things. They're moving fast without losing rigor.
That distinction matters more than people realize. In the age of AI, speed without guardrails doesn't feel bold — it feels reckless. And researchers know that. They're the ones who understand what "rigor" actually means, and they're not willing to trade it away just because there's a tool that could theoretically move faster.
Pattern 3: The Real Bottleneck Isn't Getting Responses. It's Asking Better Questions.
This part of the conversation hit hardest.
One researcher talked about how often teams jump straight into fielding a study before slowing down and asking whether the question itself is even framed correctly. The study gets launched. Responses come back. But they're responses to the wrong question, asked the wrong way, without the right context.
Another person asked something that really landed: "If AI can write surveys, synthesize themes, and summarize findings… what's actually left for the researcher?"
The answer that kept surfacing, over and over: Better framing. Better questioning. Better judgment.
A lot of teams know they need research. But they don't always know how to ask questions without unconsciously validating a hunch they already have. They ask leading questions. They forget to ask why. They frame problems as solutions.
That's where AI actually gets interesting.
Not replacing researchers. Helping teams think more like researchers.
Helping teams catch bias before a study launches. Helping people pressure-test assumptions before they collect data. Helping organizations ask the right question in the first place.
Because here's the thing: if you ask the wrong question really well, you just get a really well-answered wrong question.
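To make the "catching bias before a study launches" idea concrete: even a crude automated pass can flag obviously leading phrasing before a survey goes to field. Here's a minimal, purely illustrative Python sketch (the marker phrases are my own assumptions, not any team's actual ruleset; a real review layer would use far richer signals plus human judgment):

```python
def flag_leading_questions(questions):
    """Flag survey questions whose phrasing presupposes an answer.

    A toy, rule-based sketch. The marker list below is illustrative
    only; it stands in for whatever review layer (AI or human) a
    team actually uses to pressure-test a study before launch.
    """
    leading_markers = (
        "don't you",
        "wouldn't you",
        "isn't it",
        "how much do you love",
        "agree that",
    )
    flagged = []
    for question in questions:
        lowered = question.lower()
        if any(marker in lowered for marker in leading_markers):
            flagged.append(question)
    return flagged


draft = [
    "Don't you agree the new dashboard is easier to use?",
    "How did you decide which plan to choose?",
]
print(flag_leading_questions(draft))
# The first question gets flagged; the second, open-ended one does not.
```

The point isn't the pattern matching; it's that the check runs before a single participant is recruited, which is exactly where the dinner conversation said the leverage is.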
Pattern 4: One Great Follow-Up Question Can Teach You More Than 100 Surface Responses
Someone brought up the pothole analogy, and I can't stop thinking about it:
You don't need to hit the same pothole 10 times to know it's there.
One sharp insight from the right user can unlock something huge if you follow it properly. The breakthrough moment. The moment when a pattern becomes clear because you went deeper with one person instead of shallower with a hundred.
But here's where most teams lose it: most feedback loops stop too early.
"This was frustrating." Okay… why?
"I couldn't figure this out." What were you trying to do?
"I'd probably leave." What would make you stay?
That's usually where the signal gets lost. The surface response comes in. It gets noted. And the team moves on before they've actually understood what's underneath.
I've been watching this firsthand at Sprig — when you move beyond static text boxes and actually have a system that can follow up conversationally, the data quality shifts immediately.
"This was frustrating" becomes "It was frustrating because I couldn't find where to change my settings, and I looked in three places before giving up."
A rating of 2 becomes a story about what went wrong and why.
Simple follow-up. Dramatically richer insight.
That combination of rigor + scale came up over and over throughout the night. It's not about doing less research. It's about doing research that actually gets to the why.
What This Means for Research Leadership Right Now
This is what strong research leadership looks like in 2026.
Not obsessing over AI tools. Obsessing over judgment. Rigor. Trust. And making sure the organization is solving the right problem in the first place.
It's not the researcher who can run the most studies. It's the researcher who can ask the best questions, pressure-test the assumptions, and make sure the team is moving fast and responsibly.
The researchers who are thriving right now aren't the ones trying to do everything faster. They're the ones asking: What was I doing before that I don't actually need to do anymore? What can I hand off so I can focus on thinking?
And that shift — that's the real opportunity.
If You're a Research Leader
If you're navigating this yourself — balancing speed with rigor, trying to figure out where AI fits into your team, wondering what your job actually is when tools can do so much of the work — you're not alone.
These conversations are happening everywhere. In every company. Every industry. Every size.
Big thanks to Kristen Barbour, Karolina Gryglewicz, Shelby Muschler, Harry Morgan, Deeptha Subramanian, Hasan Simanto, Athena Petrides, and Miwako Zosel for bringing real conversations to the table last night. For not settling for theoretical, for being honest about the tension, and for thinking out loud about how to navigate it.
If you're leading research teams and want to join a future dinner in Boston or another city, check out sprig.com/events or reach out directly. These conversations are worth having.
Because here's what I know after 12+ years in research: the field gets better when we talk to each other. When we share what's actually working. When we admit what we're struggling with. When we stop pretending we have it all figured out and start building the future together.
That's what happened last night. And I think it matters.