January 15, 2021
The US Food and Drug Administration (FDA) has issued a new action plan laying out the agency’s planned approach to regulating software as a medical device (SaMD) that utilizes artificial intelligence (AI) or machine learning (ML).
The new action plan builds on FDA’s proposed regulatory framework for AI/ML-based SaMD, published in April 2019, and subsequent stakeholder feedback. Managed by the FDA Center for Devices and Radiological Health’s (CDRH) Digital Health Center of Excellence, the action plan entails the same total product lifecycle regulatory approach the agency has espoused via its Software Precertification (Pre-Cert) program for oversight of other SaMD and digital healthcare technologies in recent years.
FDA has identified five major components of the plan:
- Tailored regulatory framework development, including draft guidance addressing predetermined change control plans for SaMD that “learns” over time;
- Support for developing good ML practices to effectively review and assess AI/ML algorithms;
- Building patient-centered approaches via device transparency and other methods;
- Establishing methods to evaluate and improve AI/ML algorithm performance; and
- Setting up real-world performance monitoring pilot programs.
Tailored regulatory framework
First, FDA plans to develop a tailored regulatory framework including what the agency refers to as a “predetermined change control plan,” intended to facilitate AI and ML algorithms designed to change and improve over time. FDA will issue draft guidance on the predetermined change control plan to garner additional stakeholder feedback, with a focus on elements to include in the plan to support safety and effectiveness of AI/ML-based SaMD algorithms.
The agency also plans to focus on refining which types of modifications and changes to algorithms are appropriate for inclusion in the AI/ML-based SaMD regulatory framework, as well as developing appropriate processes for premarket submission and review of these technologies.
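The core idea of a predetermined change control plan, in which a manufacturer pre-specifies what kinds of algorithm modifications are permitted without a new premarket submission, can be loosely illustrated in code. The plan structure, change types and numeric limits below are entirely hypothetical and not drawn from any FDA guidance:

```python
# Hypothetical predetermined change control plan: the manufacturer
# pre-specifies which modification types are authorized and within
# what performance limits; anything outside that envelope would
# trigger a new premarket review.
CHANGE_CONTROL_PLAN = {
    "retraining_on_new_data": {"max_sensitivity_change": 0.02},
    "threshold_tuning": {"max_sensitivity_change": 0.05},
}

def within_plan(change_type, sensitivity_change, plan=CHANGE_CONTROL_PLAN):
    """Return True if a proposed modification stays inside the
    pre-authorized envelope; False means new review is needed."""
    limits = plan.get(change_type)
    if limits is None:
        return False  # change type was never pre-authorized
    return abs(sensitivity_change) <= limits["max_sensitivity_change"]

print(within_plan("retraining_on_new_data", 0.01))  # inside the envelope
print(within_plan("new_intended_use", 0.0))         # never pre-authorized
```

The point of the sketch is only the gating logic: changes anticipated and bounded in advance proceed under the plan, while unanticipated changes fall back to the standard premarket pathway.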
Establishing Good Machine Learning Practice (GMLP)
Second, the agency intends to establish a set of AI/ML best practices related to data management, feature extraction, training and interpretability, evaluation, documentation and related areas. FDA notes ongoing collaborations with the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), the Association for the Advancement of Medical Instrumentation (AAMI) and other organizations to develop such best practices and establish consensus AI/ML practices. Furthermore, FDA representatives currently participate in the International Medical Device Regulators Forum’s (IMDRF) Artificial Intelligence Medical Devices Working Group to drive harmonization of future GMLP.
Patient-centered approaches including transparency
A patient-centered approach to AI/ML-based SaMD, according to FDA, encompasses the need for transparency of these technologies for patients and users.
“Promoting transparency is a key aspect of a patient-centered approach, and we believe this is especially important for AI/ML-based medical devices, which may learn and change over time, and which may incorporate algorithms exhibiting a degree of opacity,” the agency states in its action plan report.
Avoiding “black box” algorithms will prove challenging, however: transparency may require clear disclosure of the data used to train SaMD algorithms, relevant inputs, the logic applied, evidence of performance and other information that manufacturers may view as proprietary. FDA plans to hold a public workshop to identify suitable information for manufacturers to provide on AI/ML-based SaMD labels in order to meet transparency goals.
Sound regulatory science methods for algorithm performance
FDA also seeks a regulatory approach that targets bias and generalizability of AI/ML algorithms, and boosts their robustness.
Because many AI/ML-based SaMD systems are developed using historical datasets, they may be vulnerable to bias. To help these systems perform more effectively across racially and ethnically diverse US patient populations, FDA intends to identify and promote regulatory science methodologies that improve algorithm performance.
Potential methodologies include those that identify and eliminate bias, as well as tools to enable algorithms to withstand changing clinical inputs and conditions, according to the FDA action plan. Such methodologies are currently under development via collaborations between FDA’s Centers for Excellence in Regulatory Science and Innovation (CERSIs) and institutions including the University of California San Francisco (UCSF), Stanford University and Johns Hopkins University.
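One common starting point for methodologies of this kind is evaluating an algorithm’s performance separately for each demographic subgroup rather than only in aggregate. The following minimal sketch flags subgroups whose sensitivity falls below a minimum threshold; the records, subgroup labels and 0.80 threshold are hypothetical, not drawn from FDA methodology:

```python
from collections import defaultdict

def sensitivity_by_subgroup(records, threshold=0.80):
    """Compute per-subgroup sensitivity (true-positive rate) and flag
    any subgroup falling below a minimum acceptable threshold.

    Each record is (subgroup, true_label, predicted_label), with
    1 = disease present and 0 = disease absent.
    """
    tp = defaultdict(int)   # true positives per subgroup
    fn = defaultdict(int)   # false negatives per subgroup
    for group, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    results = {}
    for group in tp.keys() | fn.keys():
        total = tp[group] + fn[group]
        sens = tp[group] / total if total else None
        results[group] = (sens, sens is not None and sens < threshold)
    return results

# Hypothetical evaluation records: (subgroup, truth, prediction)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 1),
]
for group, (sens, flagged) in sorted(sensitivity_by_subgroup(records).items()):
    print(group, round(sens, 2), "BELOW THRESHOLD" if flagged else "ok")
```

An aggregate sensitivity over all records here would mask the fact that subgroup B performs markedly worse than subgroup A, which is exactly the kind of disparity subgroup-level evaluation is meant to surface.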
Real-world performance (RWP)
Finally, FDA’s regulatory framework for AI/ML-based SaMD will involve adopting a total product lifecycle (TPLC) approach supported by real-world data. Performance data based on real-world use of AI/ML-based SaMD is expected to give both manufacturers and regulators insight into how these technologies are being used, how their performance can be improved, and how to address safety and usability issues most effectively.
FDA, manufacturers and other stakeholders must still address several issues related to real-world performance data:
- Types of reference data needed to measure AI/ML-based SaMD performance
- Which oversight components should be performed by different stakeholders
- Amount and frequency of real-world performance data to be provided to FDA
- Effective validation and testing methods for algorithms, models and claims
- How to incorporate feedback from end-users into AI/ML-based SaMD training and evaluation
To address these questions, the agency plans to support a pilot program for real-world performance monitoring of AI/ML-based SaMD products.
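In practice, real-world performance monitoring often amounts to comparing post-market outcomes against a premarket baseline and alerting when the two diverge. The sketch below illustrates that idea with a rolling-window accuracy check; the window size, baseline and margin are hypothetical choices for illustration, not values from the FDA action plan:

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch of real-world performance monitoring: track a
    rolling window of post-market prediction outcomes and flag when
    accuracy drifts below the premarket baseline by a set margin."""

    def __init__(self, baseline_accuracy, window=100, margin=0.05):
        self.baseline = baseline_accuracy
        self.margin = margin
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def drifted(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough real-world data yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.margin

monitor = PerformanceMonitor(baseline_accuracy=0.90, window=10)
for correct in [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]:  # 50% correct in the field
    monitor.record(correct)
print(monitor.drifted())  # rolling accuracy well below baseline - margin
```

The open questions listed above map directly onto the knobs in such a monitor: what counts as a reference outcome, who runs the check, how often results reach FDA, and how the drift threshold is validated.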
Emergo by UL will provide additional updates on FDA’s AI/ML-based SaMD action plan as the agency provides them.