Regulatory Update

GPAI with Systemic Risks: Additional Safeguards

Here we focus on the additional safeguards that apply to GPAI models with systemic risks.


July 17, 2025

By Sade Sobande

This is the second of two regulatory updates on the EU General-Purpose AI Code of Practice.

On July 10, the European Commission published the General-Purpose AI Code of Practice (“the Code”). We previously examined the foundational requirements of transparency and copyright, which apply to all GPAI models. This regulatory update focuses on the additional safeguards that apply to GPAI models with systemic risks. These requirements also apply when such models are integrated into, or form part of, an AI system.

What constitutes “systemic risk”?

Under the AI Act (AIA), a GPAI model may be considered to have systemic risk if it meets certain technical and societal thresholds:

‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain (AIA Article 3(65))

Most medical device manufacturers, however, will likely not be building these models; they will be integrating them. Manufacturers that integrate such models inherit systemic-risk obligations under the Code, particularly where model behavior could influence clinical decision-making or patient outcomes.

Additional safeguards: safety and security

As mentioned in our previous regulatory update, the Code is split into three chapters: transparency, copyright, and safety and security. The most comprehensive chapter, and the only one applicable to GPAI models with systemic risk, is safety and security. This chapter lays out 10 commitments that form the backbone of a proactive safety and security approach.

Key takeaways for medical device manufacturers:

  1. A safety and security framework needs to be established. The underlying principles of ISO 14971 may be utilized, but they must be expanded to cover the context and use of the GPAI model being deployed.
  2. Model capabilities and risks must be comprehensively documented. Assumptions should not be made about pre-trained and third-party models. Manufacturers must conduct their own evaluations to identify limitations, edge cases, and foreseeable clinical, societal and security risks specific to the intended use.
  3. Use internal and/or external expertise to conduct robust safety and security risk analyses that account for foreseeable harms in the deployment of the GPAI model in real-world settings. Consider AI-specific misuse and emergent behavior.
  4. Involve domain and end-user expertise, such as clinicians, data scientists and patient representatives, to ensure diverse perspectives and uncover blind spots or bias. This can be aligned with existing human factors/usability engineering processes. Risk management responsibilities should be shared, clearly defined and transparent.
  5. Design and run adversarial testing. Probe the adversarial robustness of your GPAI model and build in safeguards to prevent, for example, manipulation or degradation of model performance (a minimal robustness-check sketch follows this list).
  6. Implement and document appropriate safety and security risk mitigation measures for reasonably foreseeable risks identified through model evaluations or adversarial testing.
  7. Monitor AI model behavior post-deployment. If there is a shift in model performance, accuracy or behavior, post-market surveillance plans should be updated (see the drift-monitoring sketch after this list).
  8. Security measures should be embedded and account for AI-specific threats.
  9. Manufacturers must enable incident reporting and response mechanisms. These mechanisms are already established under the MDR/IVDR but should be adapted to accommodate safety and security issues related to AI outputs.
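
To illustrate takeaway 5, the Python sketch below checks how stable a model's prediction is under small random input perturbations. It is a minimal sketch, not a complete adversarial evaluation: the predict interface, the toy linear model, the perturbation scale and the 0.9 stability threshold are all illustrative assumptions, and a real test campaign would use domain-appropriate perturbations, stronger attacks and validated clinical data.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def predict(model, x):
        """Stand-in for the GPAI model's classification interface.
        Here: a toy linear classifier, used purely for illustration."""
        return int(x @ model["weights"] > model["bias"])

    def perturbation_stability(model, x, epsilon, n_trials=100):
        """Fraction of small random perturbations that leave the prediction
        unchanged. A low score flags inputs where model behavior may be
        easy to manipulate or degrade."""
        baseline = predict(model, x)
        unchanged = 0
        for _ in range(n_trials):
            noise = rng.uniform(-epsilon, epsilon, size=x.shape)
            if predict(model, x + noise) == baseline:
                unchanged += 1
        return unchanged / n_trials

    # Hypothetical model and input vector, for illustration only.
    model = {"weights": rng.normal(size=8), "bias": 0.1}
    x = rng.normal(size=8)

    score = perturbation_stability(model, x, epsilon=0.05)
    print(f"stability under perturbation: {score:.2f}")
    if score < 0.9:  # threshold is an assumption; derive it from your risk analysis
        print("flag for review: prediction is sensitive to small input changes")

Random perturbations are the weakest form of adversarial testing; structured misuse scenarios and stronger, targeted attacks should build on a baseline check like this.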

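For takeaway 7, post-deployment monitoring comes down to comparing live model behavior against a validated baseline. One common and simple statistic for detecting a shift in a model's score distribution is the population stability index (PSI). The sketch below shows one illustrative way to trigger a post-market surveillance review; the simulated data and the 0.2 alert threshold (a widely used rule of thumb, not a regulatory figure) are assumptions.

    import numpy as np

    def population_stability_index(baseline, live, n_bins=10):
        """PSI between baseline (validation-time) scores and live scores.
        Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant."""
        edges = np.quantile(baseline, np.linspace(0.0, 1.0, n_bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores
        base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
        live_frac = np.histogram(live, bins=edges)[0] / len(live)
        eps = 1e-6  # avoid log(0) in empty bins
        base_frac = np.clip(base_frac, eps, None)
        live_frac = np.clip(live_frac, eps, None)
        return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

    # Hypothetical data: validation scores vs. drifted production scores.
    rng = np.random.default_rng(seed=1)
    baseline_scores = rng.beta(2.0, 5.0, size=5000)
    live_scores = rng.beta(2.6, 4.0, size=1000)

    psi = population_stability_index(baseline_scores, live_scores)
    print(f"PSI = {psi:.3f}")
    if psi > 0.2:  # assumed alert threshold; tie it to your PMS plan
        print("significant shift detected: update post-market surveillance records")

In practice, the same comparison would run on a schedule against real production logs, with alerts feeding the incident reporting mechanisms described in takeaway 9.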
In line with MDR, all documentation must be retained for 10 years after model deployment. Additionally, the framework and model documentation shall be made publicly available to foster trust and transparency.

Implications for medical device manufacturers

The Code introduces model-level accountability. GPAI models should be treated like any critical material or component, and AI vendors as critical suppliers, subject to the same level of scrutiny, qualification and oversight. Manufacturers will need to request and review detailed documentation from AI vendors, including risk evaluations, cybersecurity practices, and monitoring mechanisms.

Where gaps in that documentation exist, manufacturers may need to implement complementary safeguards, such as GPAI-specific requirements in risk management and post-market surveillance systems, as well as in the overall product regulatory strategy.

AI safeguards regulators will expect

The safeguards outlined in the Code will become practical expectations that regulators use to assess a manufacturer's readiness to manage AI-related risks. Now is the time for manufacturers to take a proactive, strategic approach: map where GPAI models enter the product lifecycle, engage with vendors to understand their compliance efforts, and identify any necessary complementary processes.

Regulatory strategy must cover both device- and model-level governance to ensure that AI-enabled devices are not only compliant but credible and trusted in an increasingly high-scrutiny environment. As a reminder, the GPAI obligations come into force on August 2, 2025. New GPAI models placed on the market are expected to comply by August 2026, and existing models by August 2027.
