Jul 21, 2022

Imagine: You are a product developer, and your human factors (HF) validation test is underway. Only a handful of use errors have been observed. However, you notice participants attributing all their errors to the test environment or the task workflow. They explain that the tasks are not realistic and fall outside their expected environment and/or workflow. You get the HF validation test report back and it is filled with root causes related to test artifact!

What is “test artifact”?

The root cause “test artifact” attributes the use error to factors related to the test itself (e.g., methodology, tasks, environment, ancillary supplies) rather than the device or instructional materials under investigation. An HF validation report with a lot of test artifacts can be a red flag that the test was not conducted under representative conditions.

As stated in the FDA’s guidance Applying Human Factors and Usability Engineering to Medical Devices (2016), “During simulated-use human factors validation testing, test participants should be given an opportunity to use the device as independently and naturally as possible, without interference or influence from the test facilitator or moderator.”  It is in pursuit of this independent and natural use that we must strive to avoid test artifact.

Avoiding test artifacts

Ensure the environment and tasks are as natural and representative for your participants as possible. In a perfect world, we would observe participants in actual use. The following tips can help you get closer to that ideal, even in simulated-use testing.

  1. Closely simulate the environment. Ideally, participants perform tasks as they would during actual use. However, when the simulated use environment differs significantly from the actual use environment (e.g., performing tasks in a conference room that are typically performed in an operating room), participants might have difficulty performing tasks in a realistic manner. Participants might then attribute such difficulties to the unrealistic setting. You can reduce the chance of such discrepancies with some deliberate tweaks. Consider what participants might see, hear, or feel in their real use environment (e.g., lighting levels, emergency conditions, distractions). Maybe even what they taste or smell! Additionally, present all materials and accessories to participants in the manner and at the time they might naturally acquire them. Closely simulating participants’ actual use environment can help them enter the mindset of using the device in real life and, as such, help guide realistic use.
  2. Give the participant context. Because test participants are typically simulating use of the device, the consequences of that use can feel imaginary and distant at best. The same is true of any harms that might result from a use error – no actual harm is likely to occur because participants are simulating use. Providing opportunities for participants to connect to the scenario can supply that missing context and place them in the headspace of real use, resulting in more representative performance. For instance, asking participants to attach an injection cushion to their body over the site where they will perform a simulated injection helps them understand that this medication would actually be going into their body. Similarly, asking surgeons to scrub up before entering a simulated OR environment can help them treat the test environment as an actual OR. A strong simulation (e.g., a prepared surgical model) can also set a very specific scenario in which your participants can more accurately perform the task. All of this gives your participants context for their actions.
  3. Ensure scenarios follow the actual use flow. Task prompts require a delicate balance: provide participants with enough information to complete the scenario, but not so much detail that the prompt instructs them in correct use. (Yes, artificial task success due to leading or biasing prompts can result in follow-up questions from the FDA!) For instance, providing participants with three separate prompts to (1) “prepare a therapy,” (2) “attach the product to a patient,” and (3) “start the therapy” gives the participant a list of tasks and the order in which they must be performed. A more naturalistic prompt would ask participants “to start a patient on the therapy.” Such a prompt requires the participant to know that, to successfully complete the scenario, they must also perform the required preparation and attachment tasks. In scenarios that allow multiple natural orders of use, consider counterbalancing the task order across participants to help account for order-related test artifact.
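The counterbalancing idea in tip 3 can be sketched in a few lines of code. The following is a minimal illustration, not part of any specific HF tool: the function names and scenario labels are hypothetical. It builds a simple rotation-based Latin square so that, across participants, every scenario appears in every serial position equally often.

```python
def latin_square_orders(scenarios):
    """Return one rotated ordering per scenario (a basic Latin square).

    Each row starts with a different scenario, so across the set of rows
    every scenario appears in every serial position exactly once.
    """
    n = len(scenarios)
    return [[scenarios[(start + i) % n] for i in range(n)] for start in range(n)]


def assign_order(participant_index, scenarios):
    """Cycle participants through the rotated orders in turn."""
    orders = latin_square_orders(scenarios)
    return orders[participant_index % len(orders)]


# Hypothetical scenario labels for a three-scenario session
scenarios = ["Scenario A", "Scenario B", "Scenario C"]
for p in range(3):
    print(p, assign_order(p, scenarios))
```

Note that a simple rotation balances serial position but not carryover effects (which scenario precedes which); a fully balanced Latin square would be needed to control those as well.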

Carefully craft your test protocol to keep test artifacts to a minimum and reduce the chance that the FDA calls your approach into question. By incorporating the three tips above, your HF validation test should minimize unintended effects of test artifacts on participant performance, helping to ensure test participants interact with the product as they would during actual use.

Layne Hartman is Human Factors Specialist at Emergo by UL’s Human Factors Research & Design division.

Additional HFE and usability resources from Emergo by UL’s Human Factors Research & Design team:

  • Human factors analysis for medical devices, IVDs and combination products
  • Human factors design and prototype development support
  • Medical device, IVD and combination product evaluation

