
Four major questions to consider when validating mobile apps with connected devices


[Image: Girl monitoring a medical device patch with a smartphone]

January 25, 2024

By Layne Hartman

Designers and manufacturers alike have continued to leverage systems thinking and technology expansion to rapidly increase the capability and complexity of medical device and combination products. However, such developments present a real challenge today: how can we connect apps and system components to deliver a suite of products that meets users’ diverse needs in an effective, elegant, and ultimately safe way?

Designing safer, more effective medical devices

By connecting medical devices, we can achieve greater utility.

Consider, for example:

  • Increased consistency of health tracking (e.g., blood oxygen levels, heart rate) via wearable devices
  • Added accessibility by optimizing instructional materials for viewing on a mobile phone
  • Increased ease of use through automated dose tracking with drug delivery devices (e.g., pen injectors, on-body injectors) linked to smartphone apps via Bluetooth®

These and other potential benefits of connected therapeutic devices make them attractive to users, manufacturers, and clinicians. However, with the added complexity and number of components, which might include the device delivering therapy, a mobile app, instructions, batteries (or a recharging unit), and a transponder unit (e.g., if not built directly into the device), validating the use safety of these products can be challenging. Both the U.S. FDA’s guidance and IEC 62366-1 require representative use in a representative use environment for human factors (HF) validation testing, and connected devices present particular challenges for developing representative use scenarios.

Developing an HFE plan for safer medical products

When developing a human factors evaluation (HFE) plan for such products, the following four questions become particularly important:

  1. Could any of the system components be used independently? If individuals can use the device without the app or vice versa, each component presents unique user interactions that could result in a critical error. A common example is a redundant control option, such as when users can power cycle the device either through the app or on the device itself. Accordingly, both control options likely need to be addressed in the use-related risk analysis (URRA) and assessed in the HF validation test. In the case of an app, study designers need to consider how interactions on the app and the associated device can be independently traced so that each component can be thoroughly assessed.
  2. How can we test these devices under representative use? Does testing of each component need to be isolated, or can they be tested in tandem? Although it is important to test any device in the representative workflows that would occur in natural use, connected devices present a particular challenge: there is more than one device that participants might interact with during a given use scenario. Such interactions might raise a concern from a manufacturer’s standpoint, because one component’s design “flaw” might affect the full system’s validation. To best simulate actual use, study designers should strive to let participants move between system components as they would in the real world. For example, an app and associated device that require Bluetooth® pairing will likely require study participants to use both components simultaneously throughout a use scenario. However, other workflows (e.g., preparing the device, administering a dose) might require separating use scenarios by component to evaluate them successfully, especially for the previously mentioned redundant features.
  3. Can testing of one component bias the performance of another? In trying to independently assess an app and a connected device within the same HF validation test, we can introduce study bias. For instance, if all tasks associated with the app are evaluated first, skipping actions that would typically be taken on the device, then we have artificially introduced opportunities for participants to learn how to use the device before its associated tasks are assessed. Enabling naturalistic workflows and tracing interactions between system components can help reduce this bias while still ensuring effective data collection.
  4. Will the app and device be marketed as a “closed system”? From a regulatory and marketing perspective, how the app and the device are presented to the FDA can affect how independently each should be assessed. For instance, if the app and device must be used together and will be marketed as such, then presenting them within the same HFE report as a “closed system” is appropriate. However, if they might be used independently, then separate HFE reports might be necessary, as they will likely be two separate submissions. In this case, developing a method to trace findings for the app and the device separately without introducing bias is pivotal and might ultimately require two separate HF validation tests.

After answering these four questions, you might find that the answers point toward conflicting approaches! Study designers might then have to consider which approach minimizes risk to the study’s integrity. Though creating a human factors evaluation plan for connected apps is challenging, answering each of these four questions carefully can help outline a plan that thoroughly evaluates both system components under representative use while avoiding bias and meeting the manufacturer’s marketing needs.

Layne Hartman is Senior Human Factors Specialist for Emergo by UL.

