Research Project Outcomes: How conversational UX design is shaping AI

Author: Tech Ethics Lab


Artificial intelligence is transforming conversational user experience (UX) design. As conversational UX designers work to craft seamless, intuitive, and engaging digital experiences, AI is proving to be more than just a tool: it is opening up new possibilities. At the same time, it is reshaping the role of UX designers themselves, introducing challenges such as navigating ethical considerations and privacy concerns.

The field of AI, a term coined in 1955 by Stanford Emeritus Professor John McCarthy, who defined it as “the science and engineering of making intelligent machines,” now encompasses the crucial yet often underappreciated role of UX designers. These professionals are the architects of human-centered interactions with intelligent systems, shaping the user experiences of virtual assistants and voice-controlled technologies like Siri, watsonx Assistant, and Alexa. They aim to ensure these interfaces are accessible to all users by following best practices and inclusive design principles.

Elizabeth Rodwell, an assistant professor of digital media at the University of Houston, researches interactions between human users and computer-generated partners. In 2022, she was awarded a fieldwork-based grant from the Notre Dame-IBM Technology Ethics Lab. Her study involved spending seven months at OratioNet, a Tokyo-based startup that specializes in AI-driven conversation design. (OratioNet is a pseudonym that protects the company’s privacy.)

“I chose [OratioNet] because the organization includes linguists, UX experts, and computer scientists focused on human-computer interaction,” Rodwell says. “Fieldwork shows me how people work and the kinds of things they discuss organically. It’s about living with them and their challenges.”

Human-in-the-loop approach

At OratioNet, Rodwell noted the implementation of the “human-in-the-loop” (HITL) approach, an iterative feedback process in which humans review and refine the output of an algorithmic system. HITL ensures continuous learning from user interactions, enabling AI responses to be adjusted and the technology to evolve in a more user-friendly and ethical manner.
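
To make the feedback cycle concrete, here is a minimal sketch of a HITL review loop in Python. It is purely illustrative and not OratioNet's pipeline: the Exchange fields, the needs_review threshold, and the correction store are all assumptions, but they show how a human reviewer's fix can be routed back into the system so the next response improves.

```python
# Minimal human-in-the-loop sketch: the AI answers, low-confidence or
# user-flagged exchanges are queued for a human reviewer, and the reviewer's
# correction is written back so future responses improve.
from dataclasses import dataclass, field

@dataclass
class Exchange:
    user_utterance: str
    ai_response: str
    confidence: float           # model's own confidence in its reply (hypothetical)
    user_flagged: bool = False  # did the user report a problem?

@dataclass
class ResponseStore:
    corrections: dict = field(default_factory=dict)  # utterance -> approved reply

    def respond(self, utterance: str) -> str:
        # Prefer a human-approved correction when one exists.
        return self.corrections.get(utterance, f"[model answer to: {utterance!r}]")

def needs_review(ex: Exchange, threshold: float = 0.6) -> bool:
    """Route an exchange to a human when the model is unsure or the user objects."""
    return ex.user_flagged or ex.confidence < threshold

def human_review(ex: Exchange) -> str:
    # Stand-in for the real review step; a designer would write the fix here.
    return f"[reviewer-approved answer to: {ex.user_utterance!r}]"

def hitl_step(store: ResponseStore, ex: Exchange) -> None:
    """One pass of the loop: flag, review, and fold the correction back in."""
    if needs_review(ex):
        store.corrections[ex.user_utterance] = human_review(ex)

if __name__ == "__main__":
    store = ResponseStore()
    ex = Exchange("Am I in a quiet enough room?", "[unclear answer]", confidence=0.3)
    hitl_step(store, ex)                     # reviewer corrects the weak response
    print(store.respond(ex.user_utterance))  # the corrected answer is used next time
```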

The HITL approach reveals hidden complexities. In a video on a UX designer’s screen, Rodwell observed a man eating lunch during a language placement exam, where clear pronunciation is essential. Chewing interfered with his speech, making it difficult for evaluators to accurately assess the test taker’s skills.

“These recorded videos are crucial for AI audits,” Rodwell says, “providing insights into contextual factors and aiding in understanding how the AI’s speech recognition system is perceived and experienced.”

This example highlights the importance of tailoring prompts for specific needs. Rodwell explains that before a session begins, OratioNet’s AI instructs the user to find a quiet environment, saying, “You need to be in a place where I can hear you clearly,” and then asks, “Are you ready?” to confirm.
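
A script of that kind can be thought of as a small gate in the conversation flow: state the requirement, ask for explicit confirmation, and only proceed once the user agrees. The sketch below is a hypothetical illustration of the pattern; the wording, retry count, and accepted answers are invented rather than taken from OratioNet's actual prompts.

```python
# Readiness-check gate for a voice session: instruct, confirm, then begin.
AFFIRMATIVE = {"yes", "yeah", "ready", "i'm ready", "ok", "okay"}

def readiness_check(say, ask, max_attempts: int = 3) -> bool:
    """Instruct the user, then require explicit confirmation before starting."""
    say("You need to be in a place where I can hear you clearly.")
    for _ in range(max_attempts):
        if ask("Are you ready?").strip().lower() in AFFIRMATIVE:
            return True
        say("Take your time. Let me know when you're somewhere quiet.")
    return False

if __name__ == "__main__":
    # Text stand-ins for the voice channel: print prompts, read typed replies.
    say = lambda prompt: print(f"AI: {prompt}")
    ask = lambda prompt: input(f"AI: {prompt}\nYou: ")

    if readiness_check(say, ask):
        print("AI: Great, let's begin the session.")
    else:
        print("AI: No problem. We can try again later.")
```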

By observing how users engage with the system in real-world scenarios, UX designers identify pain points and areas for improvement. This analysis helps them understand consumer behavior, preferences, and challenges.

The role of conversational UX designers

Conversational UX designers craft dialogues, develop conversation maps, and continually refine AI responses based on user feedback. This iterative process, known as “human tuning,” is essential for adapting AI systems to the diverse ways humans express themselves.
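
As a rough picture of what these artifacts can look like in practice, the sketch below models a conversation map as a set of dialogue states and treats “human tuning” as adding newly observed phrasings to an intent. The state names, intents, and sample phrasings are invented for illustration and do not describe any particular product.

```python
# A toy conversation map: dialogue states, the intents that move the user
# between them, and the phrasings observed for each intent.
conversation_map = {
    "greeting": {
        "prompt": "Hi! Would you like to start a placement test?",
        "transitions": {"confirm_start": "readiness_check", "decline": "goodbye"},
    },
    "readiness_check": {
        "prompt": "Are you somewhere quiet where I can hear you clearly?",
        "transitions": {"confirm_ready": "exam", "not_ready": "readiness_check"},
    },
}

# Phrasings gathered from transcripts; designers extend these lists over time.
intent_examples = {
    "confirm_start": ["yes", "sure", "let's do it"],
    "decline": ["no", "not now"],
    "confirm_ready": ["yes", "i'm ready", "go ahead"],
    "not_ready": ["hold on", "give me a minute"],
}

def classify(utterance):
    """Naive intent matcher: exact match against known phrasings."""
    text = utterance.strip().lower()
    for intent, examples in intent_examples.items():
        if text in examples:
            return intent
    return None

def tune(intent, new_phrasing):
    """Human tuning: a designer folds a phrasing from a real session back in."""
    intent_examples.setdefault(intent, []).append(new_phrasing.strip().lower())

if __name__ == "__main__":
    print(classify("ok I guess"))        # None: the map doesn't know this phrasing yet
    tune("confirm_start", "ok I guess")  # designer adds it after reviewing a session
    print(classify("ok I guess"))        # now resolves to "confirm_start"
```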

Rodwell discovered that conversational UX design goes beyond technological practice and is deeply rooted in culture. Understanding nuances like communication styles, social norms, and everyday behaviors is essential in UX design to ensure that AI interactions respect diverse backgrounds.

For example, in cultures that tend to avoid confrontation, such as Japan’s, communication relies more on subtlety and nonverbal cues. In contrast, people in the U.S. tend to value straightforward communication, with less reliance on context.

At OratioNet, Rodwell encountered a situation where the AI’s instructions were misunderstood and a user was unaware of the UX designers’ role in his interaction with a smart home device. To enhance the system’s effectiveness, OratioNet reviews each human-computer interaction, analyzing how participants engage with the AI. This method provides valuable insights into user comprehension and navigation, identifying areas of confusion.

“In one session, I observed a man wearing only his underwear, clearly unaware that a human would review the video,” Rodwell says. “Effective AI scripts must be designed to clarify the human role in the process.”

Enhancing conversational AI with inclusive design

A significant challenge lies in addressing the needs of all users, especially those from marginalized groups. Despite the best intentions of conversational UX designers, there is often a tendency to cater to the “average” consumer, leaving many underserved. This issue is further exacerbated by prioritizing efficiency over diversity.

Rodwell believes a significant computer science and engineering misstep is treating the UX process as an afterthought.

“Many organizations fail to recognize the full value of UX design, leading to the underutilization and undervaluation of UX professionals,” Rodwell says. “These experts concentrate on technical and structural aspects of user flows and conversational elements. The value placed on UX designers often mirrors an organization’s overall commitment to user experience and innovation.”

Gender representation in AI

Virtual assistants and voice-controlled systems, frequently personified as women, reinforce stereotypes of female subservience and exacerbate historical inequalities. Additionally, as these technologies become more prevalent, addressing harassment and hate speech directed at them remains a persistent challenge.

For instance, Amazon reprogrammed Alexa with a “disengage mode” to handle inappropriate user interactions, a change necessitated by frequent misuse.

“Conversational AI often defaults to a helpful and likable female persona,” Rodwell says. “Male OratioNet users flirt with the AI at particularly inappropriate times, such as during educational sessions.”
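
A deflection policy along these lines can be approximated with a check that routes abusive or flirtatious input to a flat, non-engaging reply instead of the assistant’s usual persona. The sketch below is illustrative only: the trigger list and reply are hypothetical, and a production system would rely on a trained classifier rather than keyword matching.

```python
# Sketch of a "disengage"-style policy: detect harassing or flirtatious input
# and answer with a flat, non-engaging line instead of playing along.
DISENGAGE_TRIGGERS = {"sexy", "date me", "marry me", "shut up", "stupid"}
DISENGAGE_REPLY = "I'm not going to respond to that."

def respond(utterance: str, normal_reply: str) -> str:
    text = utterance.lower()
    if any(trigger in text for trigger in DISENGAGE_TRIGGERS):
        return DISENGAGE_REPLY  # refuse to engage rather than banter back
    return normal_reply         # otherwise continue the designed dialogue

if __name__ == "__main__":
    print(respond("Will you date me?", "Let's get back to the lesson."))
    print(respond("What's the next question?", "Here is question three."))
```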

Increased transparency could improve our understanding of how users engage with virtual assistants and voice-controlled systems, particularly with regard to perceived gender cues. Rodwell suggests that open-source conversational AI may offer a promising solution.

The open-source debate 

Open-source AI promotes inclusivity in technological development and potentially offers users more control—but achieving broad consensus remains challenging.

Companies are reluctant to share high-quality training data, a crucial asset in AI development that confers a significant competitive advantage. Sharing, though, could also benefit their bottom line: economists at Harvard Business School recently estimated that open-source software has saved companies nearly $9 trillion in development costs by allowing them to build on free, high-quality software.

A collaborative approach has also helped larger companies establish robust ecosystems around their products, as Google’s success with its open-source Android operating system demonstrates. However, “companies are unlikely to adopt this approach due to intense competition,” Rodwell says.

Ultimately, the industry must dispel the myth of technology's neutrality and recognize that effective governance, community management, and a steadfast commitment to inclusive practices are essential for ensuring that virtual assistants and voice-controlled systems are ethical, trustworthy, and responsible.

Key takeaways

AI’s profound ability to mimic human intelligence is transforming digital interactions, posing challenges for UX designers. Rodwell’s Tokyo research highlights the need for a cultural shift towards human-centered design, emphasizing the importance of understanding user behavior, communication styles, and social norms.

Rodwell advocates for open-source AI, emphasizing the importance of continuous user research, feedback, and audits to enhance conversational AI’s inclusivity and effectiveness. However, companies often resist sharing high-quality training data, a crucial asset in AI development, because of its competitive value.

Rodwell emphasizes the importance of prioritizing human-centric experiences and practically applying design ethics. She asserts that companies must conduct continuous user research, regularly update AI systems with new data, and actively gather feedback—essential responsibilities for UX designers.

. . .

Since 2021, the Notre Dame-IBM Technology Ethics Lab has issued calls for proposals (CFPs) to support interdisciplinary research in technology ethics. The 2022–2023 CFPs, focusing on “Auditing AI,” emphasized the need to evaluate and ensure ethical standards in AI systems. Among the 15 selected projects was a proposal by Elizabeth Rodwell from the University of Houston, “User Experience (UX) Work as an Auditing Opportunity: Exploring the Role of Conversational UX Designers.” Her fieldwork-based research examines how UX professionals in conversational AI address social inequities and integrate ethical considerations into UX design. The Notre Dame-IBM Technology Ethics Lab, a critical component of the Institute for Ethics and the Common Good and the Notre Dame Ethics Initiative, promotes interdisciplinary research and policy leadership in technology ethics and is supported by a $20 million investment from IBM.