February 2, 2026
By Julee Henry and Tess Forton
The U.S. Food and Drug Administration (FDA) issued a draft guidance document, Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, in January 2025. Although “human factors engineering” (HFE) is not explicitly mentioned in the title, the guidance offers fairly extensive and detailed HFE considerations unique to AI-enabled devices. The guidance describes recommendations for use-related risk management, user interface design, validation testing and human factors documentation. In this article, we explore the key takeaways from the guidance, including how it impacts HFE activities and user interface (UI) design for AI-enabled medical devices.
Usability with respect to the Human-AI Team
Understanding how humans perceive system outputs has always been a core aspect of human factors. AI-enabled device outputs tend to be more variable and are often tailored to individual users, making it necessary to understand not only the device’s performance but also the collaborative dynamics between intended users and AI systems. The guidance frames AI-enabled device usability as users consistently and correctly understanding, interpreting and applying information from the device. It introduces the terms “Human-Device Team” and “Human-AI Team,” emphasizing the importance of understanding how healthcare professionals (HCPs), patients and caregivers interact with AI outputs in real-world workflows. The guidance also places significant emphasis on transparency as it relates to usability: users must understand the limitations and intended use of the AI technology.
Use-related risk analysis
As with previous HFE guidance, this document highlights the need for a comprehensive risk analysis. However, it places new emphasis on risks arising from how users interpret and apply AI outputs. This analysis should consider risks associated with a lack of information, unclear information and misunderstood or misused information, taking into account the full device use lifecycle. It should also consider risks related to over-reliance or under-reliance on AI recommendations and the potential for automation bias. These factors extend beyond traditional human factors concerns and require manufacturers to assess how users interact with AI outputs under varying conditions, including uncertainty and evolving system performance across the device lifecycle.
Correspondingly, manufacturers will need to consider new types of mitigations, such as displaying confidence levels for AI outputs and requiring users to “check the AI’s work” if confidence levels fall below a certain threshold.
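As a rough sketch of how such a mitigation might be implemented in device software, the Python snippet below gates low-confidence AI outputs behind a mandatory user-review step. The threshold value, data structure and function names here are hypothetical illustrations, not requirements from the guidance; a real threshold would be justified by the use-related risk analysis and validation data.

```python
from dataclasses import dataclass

# Hypothetical threshold below which the device requires the user to
# independently verify the AI's recommendation before acting on it.
REVIEW_THRESHOLD = 0.80

@dataclass
class AiOutput:
    recommendation: str   # e.g., "Elevated sepsis risk"
    confidence: float     # model confidence in [0.0, 1.0]

def present_output(output: AiOutput) -> dict:
    """Decide how the UI should present an AI output to the user."""
    if output.confidence < REVIEW_THRESHOLD:
        # Low confidence: show the output, but block downstream actions
        # until the user confirms they reviewed the underlying data.
        return {
            "message": output.recommendation,
            "confidence_label": f"Low confidence ({output.confidence:.0%})",
            "requires_user_review": True,
        }
    return {
        "message": output.recommendation,
        "confidence_label": f"Confidence: {output.confidence:.0%}",
        "requires_user_review": False,
    }
```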
Human Factors Validation Testing
This guidance introduces nuanced shifts in how manufacturers should approach usability testing for AI-enabled devices. While still focusing on critical tasks and the need to assess intended users’ ability to complete them, it highlights the importance of evaluating cognitive and perceptual risks associated with interpreting outputs, even when physical interactions with the device are relatively simple. The FDA emphasizes evaluating human-AI team performance with intended users; for example, a manufacturer of an AI-powered sepsis prediction tool should assess whether clinicians understand what triggered an alert and how confident the AI is in that alert. The guidance also suggests that, in some cases, manufacturers might need to evaluate the performance of the human-AI team as compared to a human using a comparable device without AI capabilities.
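To make that sepsis example concrete, here is a minimal, purely illustrative sketch of an alert structure that surfaces both the triggering factors and the model’s confidence. The fields and values are hypothetical, not drawn from the guidance, but they show the kind of information a validation task might ask participants to interpret.

```python
from dataclasses import dataclass, field

@dataclass
class SepsisAlert:
    """Illustrative alert exposing what triggered it and how confident
    the model is, so a usability study can probe whether clinicians
    correctly interpret both."""
    patient_id: str
    risk_score: float                            # model confidence, 0.0-1.0
    contributing_factors: list[str] = field(default_factory=list)

alert = SepsisAlert(
    patient_id="A-1042",  # hypothetical record
    risk_score=0.87,
    contributing_factors=[
        "Rising lactate over 6 h",
        "Heart rate > 110 bpm",
        "New-onset hypotension",
    ],
)

# A validation task might ask: "What drove this alert, and how sure is
# the system?" and score whether participants answer from these fields.
print(f"Sepsis risk {alert.risk_score:.0%} driven by: "
      + "; ".join(alert.contributing_factors))
```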
Additionally, given the guidance’s focus on transparency, HF evaluations should assess whether users can appropriately determine when and how to apply AI-generated information or recommendations and recognize when to exclude, override or question the outputs. Appendix D in the guidance provides specific details to consider during usability evaluations.
Documentation Requirements
The FDA remains consistent in its expectation that HF-specific documentation must be submitted as part of the “Device description” in marketing submissions. Aspects unique to AI-enabled devices include describing:
- The device inputs (whether manual or automatic) and outputs
- How AI is used to achieve the device's intended use
- The intended users and their characteristics, including users involved in decisions about how the device fits into clinical care
- How AI fits into the workflow, including the degree of automation the AI provides, the clinical circumstances for use and how outputs are used in the context of those workflows
- Any AI calibration and configuration procedures that users must perform regularly to maintain performance
- All configurable elements of the AI-enabled device and the impact of configuration on decision making
User interface design considerations
In Appendix B, the guidance describes detailed user interface considerations related to transparency, providing the right information at the right time, understanding user characteristics and needs, communication style and format, and information visualization. Additionally, the guidance asserts that labeling should include a statement that AI is used in the device, as well as details about the AI model itself (e.g., the AI’s role in the intended use, detailed information about the model inputs and outputs, model limitations, etc.). A “Model card,” described in detail in Appendix E, is suggested as a new labeling element to provide this information. In designing AI-enabled devices, the user interface should strike a balance between building user trust and preventing over-reliance on outputs or acceptance at face value.
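As a loose illustration of the kind of content a model card might carry, consider the sketch below. The actual recommended elements and format are defined in Appendix E of the guidance; the field names and values here are hypothetical.

```python
# Illustrative-only sketch of model card content; the guidance's
# Appendix E defines the actual recommended elements and format.
model_card = {
    "ai_role": "Flags patients at elevated risk of sepsis to support, "
               "not replace, clinical judgment",
    "inputs": ["vital signs", "lab results", "demographics"],
    "outputs": ["risk score (0-100)", "contributing factors"],
    "limitations": [
        "Not validated for patients under 18",
        "Performance may degrade if input data are incomplete",
    ],
}
```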
Conclusion
This guidance highlights a new way of thinking about HFE in the context of AI-enabled devices, requiring manufacturers to evaluate how users interpret evolving AI outputs, manage uncertainty and maintain appropriate levels of reliance on machine-generated recommendations. It reframes the user not just as an operator, but as a collaborative partner in decision-making alongside AI. Given this shift, it is more important than ever to deeply understand the diverse perspectives, experiences and expectations users bring to their interactions with such devices, especially in high-stakes environments like healthcare.
In summary, here are three key takeaways on the guidance’s HF implications:
- Human-AI team dynamics are central to usability. Manufacturers should assess how users interpret, apply and act on AI outputs in real-world workflows, not just how they operate the device.
- Risk analysis and validation testing must address cognitive and perceptual risks, including misinterpretation, misuse, under- and over-reliance and uncertainty of AI outputs and recommendations, especially as AI outputs evolve.
- The user interface design must support transparency, providing the right information at the right time and guiding users to engage critically with AI outputs while clearly communicating the AI’s role, limitations, and impact on decision-making.
Contact our team to learn more about HF activities for AI-enabled medical devices. Or, sign up for a complimentary account with OPUS, our team’s software platform that provides HFE training, tools, and templates to help you complete cornerstone HFE activities.
Julee Henry is a Lead Human Factors Specialist and Tess Forton is a Managing User Researcher at Emergo by UL.