The Perception & Potential of AI
In recent years, the integration of AI into various domains has sparked both fascination and apprehension. The realm of UX research and product development is no exception, as professionals navigate the promises and pitfalls of AI-powered tools.
Let’s talk through the perception of AI and the potential of the technology to transform research.
Growing Interest Amid Anxiety: There’s a surge of interest in AI among researchers. Google search trends show that searches related to “UX Research AI” jumped 150% from 2022 to 2024. Many researchers are considering how to leverage AI to enhance user-centric practices and are incorporating it into their workflows to varying extents.6
According to a study by User Interviews that polled 1,093 UX professionals, 77% of respondents use AI to some extent in their workflows.7 Notably, the UXR segment emerged as the most cautious group regarding AI integration.
One concern among researchers is that the technology will create “bot-centered” research instead of “human-centered” research.
“If we take the U out of UX [by utilizing AI Users,] it’s not user-centered or customer-centric. It’s bot-centered research or design,” says UX researcher Debbie Levitt with Delta CX.
“It can easily lead to poor or incorrect strategies, decisions, products, or services. I imagine a future meeting about a project failure where someone in leadership asks why we thought this project was a good idea. Someone will say that we saved time and money by working with AI instead of users,” she adds.8
Furthermore, researchers highlight the risk of AI creating an echo chamber for business leadership and serving employer needs over the needs of the user.
“How much do my employer’s needs, KPIs, and goals align with serving people? That’s exactly [the researcher’s] job. [The researcher] should be the one who is squarely in those conversations and helping the client or employer negotiate and understand that,” said tech researcher and advisor Tricia Wang on the Rosenfeld Review Podcast.
“The reason I think AI forces this is because when you’re in the culture of the user, where users are passive and the KPIs are meant to keep users engaged for as long as possible and get them to produce as much data, then if you add AI on top of that, I think it’s only going to exacerbate what we already have.
“Which is a long multi-century effort to engineer consent of the masses…to engineer people, to sway people…it will only do more of what we already know, than what we already see.”9
These perspectives underscore the ethical and strategic considerations inherent in AI-infused UXR practices. Several practical challenges also hinder the seamless integration of AI tools into UXR workflows.
The Nielsen Norman Group identifies various limitations, including the inability of most AI tools to process visual inputs, generate contextually relevant insights, or maintain reliability and usability. Moreover, concerns regarding bias and lack of validation pose significant obstacles to the credibility and efficacy of AI-driven UXR approaches.10
Potential Benefits of AI: Despite these hesitations, there are clear use cases for the technology, especially when the AI system has been built in partnership with researchers.
Efficiency stands out as the foremost benefit cited by professionals leveraging AI in their research endeavors. AI streamlines processes such as analysis, synthesis, and content creation, thereby augmenting productivity and scalability.
The research community argues, though, that AI tools need further iteration to truly serve researcher needs. Here are six key improvements identified by the Nielsen Norman Group:
- Incorporating Diverse Data Sources: These AI research tools should be designed to accommodate a wide range of contextual information, such as study goals, research questions, participant details, and prior research findings. This flexibility allows for a more comprehensive analysis.
- Supporting Edits and Collaboration: Effective AI tools should allow researchers to easily edit and correct the system's outputs. Collaboration between AI systems and human researchers is crucial, especially given the current dependence on transcripts. While AI technology is advancing, these systems still require human guidance to ensure their accuracy and relevance.
- Providing Clear References for Validation: Researchers need mechanisms to cross-check the AI system's conclusions. This means the tools should offer clear references that indicate the source of specific insights, such as a particular session or an observer's note. This feature helps researchers validate AI-generated findings and maintain data integrity (see the sketch after this list for one way such references could be structured).
- Emphasizing User Experience and Reliability: Despite using advanced technology, these AI tools must prioritize usability. A smooth user experience is crucial, particularly for innovative technology. Reliability in operation and design should be a given.
- Supporting Video and Webcam Analysis: For comprehensive usability testing, AI tools must be capable of processing visual inputs like video footage and webcam recordings. Tools that claim to analyze usability based solely on transcripts are likely missing critical context.
- Ensuring Accurate Promotional Claims: AI research tools must present honest and accurate information about their capabilities. Misleading or exaggerated promotional claims can lead to disappointment and mistrust. Accurate marketing is essential for establishing credibility in the AI research tool market.11
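To make the “clear references” requirement above concrete, here is a minimal illustrative sketch, in Python, of how an AI-generated insight could carry pointers back to its source material. The names and fields are hypothetical and not drawn from any particular tool; the point is simply that every finding stays traceable to a session or note that a researcher can check.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SourceReference:
    """Pointer back to the raw material an insight was drawn from."""
    session_id: str                  # e.g. a specific usability-test session
    kind: str                        # "transcript", "observer_note", or "video_clip"
    excerpt: str                     # verbatim snippet that supports the insight
    timestamp: Optional[str] = None  # position in the recording, if known


@dataclass
class Insight:
    """An AI-generated finding that researchers can edit and cross-check."""
    summary: str
    references: List[SourceReference] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # An insight with no source reference cannot be validated.
        return len(self.references) > 0
```

With a structure like this, a researcher reviewing the output can jump straight to the original session or observer's note rather than taking the summary on faith, and can edit or discard any insight that lacks supporting references.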
By incorporating these enhancements, AI tools can evolve into invaluable collaborators, augmenting rather than supplanting human expertise in the pursuit of user-centric innovation.