July 14, 2025
This is the first of two regulatory updates on the EU General-Purpose AI Code of Practice.
On July 10, the General-Purpose AI Code of Practice (“the Code”), facilitated by the European AI Office, was published by the European Commission. While the adequacy of the Code will be assessed by Member States over the coming weeks, its publication marks a key development in the implementation of the EU AI Act. Although still voluntary, the Code serves as a critical compliance baseline that we can expect to form part of the conformity assessment for devices incorporating such models.
In this first regulatory update on the Code, we focus on the risk tiers, structure and the transparency and copyright requirements.
General-purpose AI risk tiers
Under the AI Act, general-purpose AI (GPAI) models fall into two tiers: GPAI models and GPAI models with systemic risk. A GPAI model presenting systemic risk that is then integrated into a high-risk AI system will therefore have to comply with the requirements for high-risk AI systems, as well as with the requirements for GPAI models with systemic risk for the AI model part. Note that the majority of AI-enabled medical devices and IVDs are likely to be categorized as high-risk AI systems.
One code, three chapters
The Code is divided into three chapters: transparency, copyright, and safety and security. The chapter on safety and security, the most comprehensive of the three, applies only to providers of GPAI models with systemic risk. The chapters on transparency and copyright apply to all GPAI models.
Transparency: Know your model
The chapter on transparency comprises three measures: (1) maintaining up-to-date documentation, (2) providing relevant information and (3) supporting quality, integrity and security. It emphasizes that even when a model has not been built in-house, manufacturers must perform due diligence and verify the compliance of the models they use. Providers are expected to maintain detailed documentation on the model’s capabilities, intended uses, limitations, training data sources, evaluation methods and risks.
Relevant information must be made accessible to the AI office and downstream providers. Medical device manufacturers incorporating GPAI must understand, document and communicate its appropriate use. For medical device manufacturers familiar with the Transparency Guiding Principles for Machine Learning-Enabled Medical Devices, incorporating the additional requirements will not require significant effort. There is also a useful Model Documentation form that includes all the information necessary for compliance with Measure 1.1 – use it!
Copyright compliance
Five measures are outlined to support compliance with EU copyright law: (1) implementing and keeping up to date a copyright policy, (2) confirming that only lawfully accessible content is used in web crawling, (3) identifying and complying with rights reservations such as those expressed via robots.txt or metadata, (4) implementing safeguards to prevent copyright-infringing outputs, and (5) providing a mechanism for rightsholders to submit complaints. For medical device manufacturers, this adds further layers of regulatory diligence. When using third-party models, manufacturers must assess whether their vendors have taken appropriate steps to lawfully acquire and use training data. Supplier documentation and risk management procedures will need to be updated accordingly.
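For readers who want a concrete sense of measure (3), rights reservations in robots.txt are machine-readable and can be honored programmatically. Below is a minimal sketch using Python's standard-library robots.txt parser; the "AIBot" agent name and the robots.txt content are hypothetical examples, not part of the Code.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt in which the site owner reserves rights
# against a specific crawler ("AIBot") while allowing others.
robots_txt = """\
User-agent: AIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler identifying as "AIBot" must respect the reservation
# and skip this URL; other agents remain free to fetch it.
print(parser.can_fetch("AIBot", "https://example.com/articles/1"))     # False
print(parser.can_fetch("OtherBot", "https://example.com/articles/1"))  # True
```

In practice a compliant crawler would fetch each site's live robots.txt (and check any opt-out metadata) before collecting training data, and log the result as evidence for the copyright policy under measure (1).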
Practical next steps
Medical device manufacturers can identify where GPAI is already in use or planned for use and confirm that they appropriately classify it. Third-party suppliers should be engaged as early as possible to initiate contract updates and request relevant documentation. Build on existing quality management systems. Identify gaps and take steps to remediate them. Acting early is key to building trust with regulators, end-users and third-party suppliers.
GPAI compliance deadlines are approaching quickly
The obligations on GPAI models come into force on August 2, 2025. New GPAI models placed on the market are expected to comply by August 2026, and existing models by August 2027. While much public discourse has focused on high-profile models such as GPT-4, Claude and Gemini, MedTech manufacturers don’t need to deploy trillion-parameter models to be affected. Many AI-driven features, from administrative automation to clinical decision support to diagnostic triaging, are powered by general-purpose models fine-tuned for healthcare applications.
Manufacturers of these models and/or devices incorporating them now face increased expectations around transparency, risk management, data provenance and model governance, even if they are not the original developers. The first step of any successful project is regulatory strategy and planning: review what is used or will be used, categorize risks, perform a gap assessment and determine the plan for future steps.
Our next regulatory update on the Code focuses on the additional safeguards relevant to GPAI with systemic risks.