Aug 13, 2019


For most of its young life as a technology discipline, user experience (UX) has been focused on graphical user interfaces or GUIs displayed within a variety of form factors (e.g., desktop computer monitor, mobile phone display, bedside monitor in a hospital). User research and design in software have, therefore, focused on the anticipated and existing interactions that humans have with what can be seen. Although there was some notion that what was seen was the result of hidden digital logic, users of software felt, and indeed had, more agency when it came to sharing information and completing transactions on their own behalf.

With the advent of machine learning and artificial intelligence (AI), there has been a shift in the level of human agency in the technology experience. Embedded awareness has reduced the amount of human effort it takes to search online for information (think of location sensitivity in a search engine), and customization has surrounded us with our own opinions and preferences (think of online advertisements that are sensitive to online behavior).

User research and data science

How, therefore, do user researchers and designers address human needs and preferences in a world where more experiences are beyond our senses and where technology-driven decisions determine our interactions in day-to-day life? New methods and relationships are needed to ensure human agency in an AI world.

From a user research point of view, one key relationship is that between the user researcher and the data scientist. Datasets are the bedrock of algorithms. Without data, algorithms are advanced mathematical equations without a purpose. With data, algorithms begin to characterize an experience by identifying and prioritizing patterns – patterns of human behavior, patterns of disease, patterns of human identity. Without correct and complete data, algorithms can produce erroneous outcomes that could as easily take away a privilege as bestow one.

Data scientists are experts in coding and cleansing data. They optimize datasets so that algorithms can more easily consume the data. Dataset curation is both art and science, and the more data that is available, the more a machine can learn. The risk is that the data may be incomplete or, worse, incorrect. In the case of medical diagnosis, incomplete or incorrect data could mean the difference between life and death.

User researchers and data scientists can exercise agency by working together to ensure that data is designed based on the human experience. Data design could take the form of research-based personas that are compared to how people are characterized in a dataset. Data design could also take the form of a mental model where research-based human perceptions of system processing are compared to machine learning outcomes to emphasize human decision making patterns.

Explainable AI and user experiences

A key challenge for designers is how and when to show that AI is behind a user experience. A growing number of manufacturers and industries, including healthcare, are bringing AI to the forefront of their products and services; in some cases AI's presence is immediately visible in an interface or has become the main feature (Siri or Alexa, for example). The challenge for designers is how AI can become part of a product or system in a way that users accept and that leaves them comfortable with the decisions being made for them, especially where their health is concerned.

Explainable AI (XAI) refers to techniques in AI whose results can be trusted and easily understood by humans. It contrasts with the "black box" concept in machine learning, where even the designers and developers may not be able to explain why the AI arrived at a specific decision. With XAI, if users do not agree with an AI-driven decision, they will at least be able to make sense of how it was reached. Before that can happen, however, AI must be able to convey to the user, in simple terms, how it came to a decision.

The ‘think aloud’ method is often used in user research as a valuable and effective way of uncovering users’ thought processes and gaining insights while they complete tasks. This method could be translated to explainable AI, whereby the thought process (in this case, the algorithm) used by the AI is communicated in a way that humans can easily understand (via graphics or text, for instance).
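To make that idea concrete, here is a minimal sketch of turning a model's internal weighting into a plain-language "think aloud" narration. Everything in it — the feature names, weights, inputs, threshold, and wording — is hypothetical, and a real system would draw on the actual model rather than a toy linear score:

```python
# Minimal sketch: translate a (hypothetical) linear model's reasoning into
# plain-language text, similar to a "think aloud" narration.
# All feature names, weights, and inputs below are made up for illustration.

def explain_prediction(weights, inputs, labels, top_n=2):
    """Rank feature contributions and phrase the largest ones in plain language."""
    contributions = {name: weights[name] * inputs[name] for name in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    score = sum(contributions.values())
    decision = "follow-up recommended" if score > 1.0 else "no follow-up needed"
    reasons = [
        f"{labels[name]} {'raised' if value > 0 else 'lowered'} the score"
        for name, value in ranked[:top_n]
    ]
    return f"Decision: {decision}, mainly because {' and '.join(reasons)}."

# Hypothetical patient-style inputs and human-readable labels for each feature.
weights = {"age": 0.02, "smoker": 0.8, "exercise": -0.5}
inputs = {"age": 54, "smoker": 1, "exercise": 0.3}
labels = {"age": "your age", "smoker": "being a smoker", "exercise": "your exercise level"}

print(explain_prediction(weights, inputs, labels))
```

The design choice mirrors the think-aloud protocol: rather than exposing the raw arithmetic, the system surfaces only the few factors that most influenced the outcome, phrased in the user's own vocabulary.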

XAI applied to healthcare settings

Where AI is used within medical software to deliver a patient diagnosis, the software could generate text based on patient-provided information and display its reasoning using terms and phrases the user can easily understand, making the information presented to both clinicians and patients easier to understand and, perhaps, to accept.

Designers have an amazing ability to empathize with users. Aside from determining what needs users have and what features they desire, designers can also help with understanding the best way for AI to communicate with users and visualize what this conversation might look like. User research will always be an invaluable part of the development process as it helps designers and developers to understand user behaviors, needs and motivations. Early research should focus on understanding the way users react to and interact with AI, helping designers design the conversation.

While it is important for AI to explain its processes, there must be a delicate balance between providing too much information, causing cognitive overload, and providing too little, leaving the user still not fully understanding how or why the decision was made. This is especially crucial in healthcare, where AI may begin to replicate or replace the role of the healthcare professional and the first point of contact is no longer human.
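One way a design might strike that balance is a tiered explanation: a one-line summary by default, with fuller step-by-step detail only when the user asks for it. The sketch below illustrates the pattern; the explanation text and function names are hypothetical placeholders, not from any real product:

```python
# Minimal sketch of a tiered explanation: a short summary by default,
# with fuller detail only on request, to balance clarity against overload.
# The explanation content here is hypothetical placeholder text.

def render_explanation(summary, details, level="summary"):
    """Return the one-line summary, or the summary plus numbered detail steps."""
    if level == "summary":
        return summary
    steps = "\n".join(f"  {i}. {step}" for i, step in enumerate(details, 1))
    return summary + "\n" + steps

summary = "Your reading suggests a follow-up appointment."
details = [
    "Your blood pressure was above the typical range for your age group.",
    "Two of your last three readings showed the same pattern.",
]

print(render_explanation(summary, details))           # default, low-overload view
print(render_explanation(summary, details, "full"))   # expanded view on request
```

The point is that the amount of explanation becomes a design parameter users control, rather than a fixed dump of model internals.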

As humans we have a great ability to understand, interpret and communicate; we can put into words and eloquently describe very complex processes. When implementing AI within software to be used as a medical device, both the user researcher and the designer bear a great responsibility to exercise agency and facilitate, in an unbiased manner, the previously unspoken dialog between human and AI.

Mary Burton is User Experience Director and Oliver Cook is Human Factors Specialist at Emergo by UL’s Human Factors Research & Design division.

Related medical device human factors resources from Emergo by UL:

  • HFE user research for medical devices and IVDs
  • Human factors design and prototype development support
  • Whitepaper: Applying human factors to wearable medical devices

