Human Factors Considerations for AI-Enabled Products 

Discover key human factors issues for medical devices featuring artificial intelligence (AI) components. Learn how to design safer, more effective AI-enabled healthcare products with proper usability testing.

Human factors for AI-enabled medical devices and systems

October 24, 2025

By Michael Wiklund and Julee Henry 

As in many other areas of modern life, artificial intelligence (AI) stands to transform significant aspects of healthcare. One of its promising applications lies in closed-loop medical products that automatically deliver medication or treatment based on real-time physiological data. These products, such as insulin pumps, neurostimulators and implantable drug delivery mechanisms, are increasingly integrating AI to enhance their safety, effectiveness, adaptability and usability. While AI introduces the promise of autonomous function, thereby potentially simplifying user interfaces, human factors engineering and usability continue to play a crucial role in shaping safe, usable and trustworthy systems that meet users’ needs.

Potential advantages of AI integration  

AI stands to improve the performance of closed-loop devices in several ways that may not be attainable with manual control:

  1. Adaptive dynamic optimization: AI algorithms can “learn” from individual patient data to fine-tune treatment delivery, adapting to changing physiological conditions related to a user’s daily activities and health condition. Similarly, AI can analyze long-term trends to optimize therapy over time, potentially improving outcomes and reducing side effects. 
  2. Proactive intervention: Machine learning models can anticipate adverse events, such as hypoglycemia or seizure onset, allowing the device to intervene proactively. For example, the product might adjust an ongoing therapy (e.g., reduce the insulin delivery rate) or emit an alarm calling for human intervention (see the illustrative sketch after this list). 
  3. Reduced burden: Through automation, AI can reduce a user’s or clinician’s mental workload and the number of tasks to be performed. In turn, these changes may improve adherence to therapy and enhance quality of life. A patient may feel less anxious and enjoy improved health. A clinician may be able to concentrate on important matters other than those that have been effectively automated.  
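
To make the closed-loop concept concrete, here is a minimal Python sketch, for illustration only, of how a controller might combine a simple trend prediction (standing in for an AI model) with adaptive dose adjustment and a proactive call for human intervention. All names, thresholds and the linear trend model are hypothetical simplifications, not drawn from any real device.

```python
# Illustration only: a simplified closed-loop adjustment with a proactive
# "predicted low" check. Thresholds, names and the linear trend model are
# hypothetical and not taken from any real device.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DoseDecision:
    basal_rate_u_per_hr: float      # adjusted insulin delivery rate
    alert: Optional[str]            # message calling for human intervention, if any


def predict_glucose_mg_dl(recent_readings: List[float], minutes_ahead: float = 30.0) -> float:
    """Naive linear extrapolation of the glucose trend (a stand-in for an AI model)."""
    if len(recent_readings) < 2:
        return recent_readings[-1]
    slope_per_min = (recent_readings[-1] - recent_readings[-2]) / 5.0  # readings assumed 5 min apart
    return recent_readings[-1] + slope_per_min * minutes_ahead


def decide_dose(recent_readings: List[float], current_basal: float) -> DoseDecision:
    """Adapt delivery and intervene proactively before a predicted hypoglycemic event."""
    predicted = predict_glucose_mg_dl(recent_readings)
    if predicted < 70.0:   # predicted low: reduce delivery and alert the user
        return DoseDecision(current_basal * 0.5, "Low glucose predicted; insulin delivery reduced.")
    if predicted > 180.0:  # predicted high: modest increase, capped for safety
        return DoseDecision(min(current_basal * 1.2, 3.0), None)
    return DoseDecision(current_basal, None)


if __name__ == "__main__":
    # A falling trend (120 -> 105 mg/dL over 5 minutes) triggers a proactive reduction.
    print(decide_dose([120.0, 105.0], current_basal=1.0))
```

In a real product, the prediction step would be a validated model and every threshold would be established through risk analysis and clinical evidence; the point here is simply that the device adjusts therapy and escalates to the user without waiting for the adverse event to occur.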

Human factors and use-related risks for AI-enabled devices  

Despite the potential benefits suggested above, the integration of AI introduces new human factors challenges that must be addressed:  

  1. Changes to workflow: AI technology has the potential to change workflows and users’ roles in clinical decision making. Use-related risk analyses should account for changes to the task flow (e.g., a shift from manual input to oversight of AI outputs) and to users’ responsibilities (e.g., clinicians may need to remotely validate AI outputs or manage system overrides).    
  2. Transparency and trust: Users may struggle to understand how AI-driven decisions are made or how and when the AI model might evolve over time. Lack of transparency can erode trust and compromise technology adoption. 
  3. Overreliance: Automation can compromise the “situational awareness” needed to recognize and manage product malfunctions and other aberrant situations. Therefore, the risks and benefits of AI-based automation must be weighed, and use-related risk analyses should account for the potential to overlook AI limitations or to fail to maintain appropriate human oversight.  
  4. Failure modes and recovery: AI-enabled products may fail in unexpected ways. Designing for graceful degradation, clear recovery methods and AI-to-human handoff when necessary is vital (see the sketch following this list).  
  5. Alert fatigue: Excessive AI-generated alerts can overwhelm users or lead to “alarm blindness.” Human factors analyses are needed to strike the right balance between alerting users enough to keep them “in the loop” and letting the system handle matters autonomously. 
  6. Training and comprehension: Users should be provided the means (e.g., online videos, quick reference cards, in-person education) to improve their AI literacy and to understand the capabilities and limitations of their AI-enabled product.   
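
As one hypothetical illustration of graceful degradation and AI-to-human handoff, the sketch below falls back to a conservative preset delivery rate and issues a single, clear alert when the AI recommendation is missing or fails a plausibility check. The fallback rate, plausibility bounds and alert wording are assumptions for illustration, not guidance for any actual device.

```python
# Illustration only: graceful degradation and AI-to-human handoff.
# The fallback rate, plausibility bounds and alert wording are hypothetical.
from typing import Optional, Tuple

FALLBACK_BASAL_U_PER_HR = 0.5   # conservative preset used when the AI output is unusable


def select_basal_rate(ai_recommendation: Optional[float]) -> Tuple[float, Optional[str]]:
    """Return (rate, handoff_alert); a missing or implausible recommendation triggers fallback."""
    if ai_recommendation is None or not 0.0 <= ai_recommendation <= 5.0:
        # Degrade gracefully: keep delivering a safe preset and hand control back to the
        # user with one clear alert rather than a stream of repeated warnings.
        return FALLBACK_BASAL_U_PER_HR, (
            "Automated adjustment unavailable. Delivering preset basal rate; "
            "please review and confirm therapy settings."
        )
    return ai_recommendation, None


if __name__ == "__main__":
    print(select_basal_rate(1.2))    # normal operation: AI recommendation accepted
    print(select_basal_rate(None))   # model failure: fallback rate plus a handoff alert
```

The sketch also touches on alert fatigue: the handoff is surfaced as one actionable alert rather than a series of repeated low-priority warnings.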

Designing AI-enabled devices for safer and more effective use  

To mitigate the risks listed above, developers should adopt a human-centered design approach to product development. While these activities are well known and might seem obvious, they are more important than ever as product development teams venture into new and unfamiliar territory shaped by AI automation. This approach includes:  

  • User research: Involving patients and clinicians early in the design process to ensure the product aligns with real-world needs and preferences. This is particularly important because AI can fundamentally change how users interact with the product, make decisions, and manage care.  
  • Use-related risk analysis: Identifying and evaluating risks with a focus on use errors unique to AI that can lead to hazards, such as automation bias, loss of situational awareness, misinterpretation of information, or absent or delayed human intervention.  
  • User interface design: Designing a user interface that helps users understand why the product made a particular decision and took a particular action, the limitations of the AI automation, and how to intervene in a safe manner (i.e., take control). 
  • Usability testing: Evaluating how users interact with the device under realistic but simulated conditions to identify design strengths and opportunities for improvement, particularly regarding use-related risks stemming from shifts in users’ roles and AI automation.  

Apply human factors to AI-enabled devices for optimal user experience  

AI automation stands to improve the safety, efficacy and usability of closed-loop medical products, creating better patient outcomes in myriad ways. As we navigate this uncharted territory, human factors specialists like us are excited to apply long-standing human factors engineering practices in very new contexts.  

To learn more about how Emergo by UL can help apply human factors to your AI-enabled medical device, reach out to us today.    

Michael Wiklund is a Leader of Life Science Industry Practice and Julee Henry is a Lead Human Factors Specialist.