Chapter V of Regulation (EU) 2024/1689 covers Articles 51 to 56 and governs general-purpose AI (GPAI) models. Unlike the high-risk AI system provisions, which allocate duties across providers, deployers, and other operators, Chapter V is addressed primarily to providers of GPAI models. But the boundary between provider and deployer is not fixed. Article 25 creates a mechanism by which a deployer becomes a provider when it exercises certain forms of control over an AI system, including one built on a GPAI model. Understanding that mechanism is essential for any organisation whose AI agent product is built on a foundation model.

Key takeaways

  • GPAI obligations under Chapter V (Articles 51 to 56) took effect on 2 August 2025 under Article 113(b). They are not subject to the proposed Omnibus delay. GPAI providers have been under binding obligations for nine months already.
  • Article 25(1) defines when a deployer becomes a provider of a high-risk AI system: when they place the system on the market under their own name or trademark, when they substantially modify a high-risk system, or when they change a system's intended purpose, including that of a GPAI system, so that it becomes high-risk. Each trigger has distinct practical implications.
  • Article 53 imposes baseline obligations on all GPAI providers: technical documentation, downstream information for integrators, a copyright compliance policy, and a training data transparency summary.
  • Article 55 imposes additional obligations on providers of GPAI models with systemic risk (presumed for models trained with more than 10^25 FLOPs): model evaluation including adversarial testing, systemic risk assessment and mitigation, serious incident notification to the AI Office, and cybersecurity measures.
  • The GPAI Code of Practice, facilitated by the AI Office under Article 56, is the compliance pathway that providers following the Code can use to demonstrate conformity with Articles 53 and 55.

When did GPAI obligations come into effect?

Article 113 of Regulation (EU) 2024/1689 sets the application dates for different parts of the regulation. The general application date for most provisions is 2 August 2026. The GPAI model obligations in Chapter V have an earlier application date: Article 113(b) provides that Chapter V (Articles 51 to 56), together with the governance provisions and most of the penalties chapter, applies from 2 August 2025, twelve months after the regulation entered into force. This means GPAI providers have been subject to binding obligations since August 2025.

The significance of this earlier date is frequently overlooked in compliance planning. Operators focused on the 2026 high-risk deadline have sometimes treated the GPAI chapter as a future concern. It is not. An organisation that deployed a fine-tuned GPAI model under its own brand after August 2025, and whose activities constitute provider-level operation, was under Article 53 obligations from the date of deployment. The AI Office has authority to investigate and enforce Chapter V obligations now.

The Digital Omnibus on AI, which proposes to push the high-risk AI obligations from August 2026 to December 2027, explicitly does not affect Chapter V. The GPAI timeline is unchanged.

The Article 25 provider transition: three triggers

Article 25(1) of Regulation (EU) 2024/1689 identifies the circumstances under which a distributor, importer, or deployer acquires the obligations of a provider of a high-risk AI system. Understanding the three triggers is critical for any organisation deploying GPAI-based products.

The first trigger, Article 25(1)(a), is putting one's name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements that allocate the obligations otherwise. A company that takes a commercially available GPAI model and offers it to its own customers as "Brand X AI Assistant," where the offering is distinct from the original model product, will generally be treated as the provider of the resulting AI system if that system is high-risk. The question of when a rebranded or repackaged offering constitutes a new system is one that the AI Office and national market surveillance authorities will resolve through guidance and, ultimately, enforcement decisions.

The second trigger, Article 25(1)(b), is substantial modification of a high-risk AI system that has already been placed on the market or put into service, in such a way that it remains high-risk. Where a GPAI model is a component of a high-risk AI system and the deployer substantially modifies that system, the deployer becomes the provider of the modified system. This trigger is most relevant for organisations deploying GPAI-based agents in the high-risk categories listed in Annex III, such as employment decisions, credit scoring, or essential services.

The third trigger, Article 25(1)(c), is modifying the intended purpose of an AI system, including a GPAI system, that was not high-risk, in such a way that it becomes high-risk. Separately from Article 25, a deployer who modifies the GPAI model itself can become the provider of a new GPAI model: under the AI Office's guidelines on the Article 3(63) definition, a sufficiently significant modification, such as extensive fine-tuning, can produce a distinct model with its own Chapter V obligations. Article 3(63) defines a GPAI model as an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks. A deployer who takes a foundation model and builds a multi-task AI platform serving their own customers may therefore have produced a second GPAI model.
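The triggers above can be sketched as a first-pass screening checklist. This is an illustrative simplification for internal triage, not legal advice; the `Deployment` fields and function names are hypothetical, and a real assessment is always case by case.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Illustrative facts about a deployed GPAI-based system (hypothetical fields)."""
    own_name_or_trademark: bool      # offered under the deployer's own brand
    substantial_modification: bool   # substantially modified a high-risk system
    repurposed_to_high_risk: bool    # intended purpose changed so it becomes high-risk

def article_25_triggers(d: Deployment) -> list[str]:
    """Rough screen for the Article 25(1) provider-transition triggers.

    A non-empty result means a detailed legal assessment is needed;
    an empty result is NOT a safe harbour.
    """
    triggers = []
    if d.own_name_or_trademark:
        triggers.append("Art. 25(1)(a): own name or trademark")
    if d.substantial_modification:
        triggers.append("Art. 25(1)(b): substantial modification")
    if d.repurposed_to_high_risk:
        triggers.append("Art. 25(1)(c): purpose change to high-risk")
    return triggers
```

A deployer running this kind of checklist across its product portfolio gets a worklist of systems needing the full Article 25 analysis, not a compliance determination.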

Article 53: baseline obligations for all GPAI providers

Article 53 applies to all providers of GPAI models regardless of their systemic risk classification. It contains four obligations that are now in force.

The first, under Article 53(1)(a), is the preparation and maintenance of technical documentation. The documentation requirements for GPAI models are set out in Annex XI of the regulation. They cover the model's training methodology, training data, architecture, evaluation results, and capabilities and limitations. The documentation is not a marketing document. It is a technical record that the AI Office can request and that underpins the information downstream providers rely on to fulfil their own obligations.

The second obligation, under Article 53(1)(b) and Annex XII, is to provide information and documentation to downstream providers who integrate the GPAI model into their AI systems. The information must be sufficient to allow the downstream provider to understand the capabilities, limitations, and risks of the model and to discharge their own obligations under the regulation. This creates a chain of documentation responsibility running from the foundation model provider through to the deployer of the final system.

The third obligation, under Article 53(1)(c), is to put in place a policy to comply with Union copyright law, in particular to identify and respect rights reservations made under Article 4(3) of Directive (EU) 2019/790 on copyright in the digital single market (the text and data mining opt-out). The EU AI Act does not itself resolve the underlying copyright questions about AI training, but it requires providers to have an articulated and documented position.

The fourth obligation, under Article 53(1)(d), is to publish a sufficiently detailed summary of the content used for training, in accordance with a template published by the AI Office. This training data transparency summary must be made publicly available and must allow regulators and the public to understand the categories of data on which the model was trained.

Article 55: systemic risk obligations

Article 55 applies to providers of GPAI models with systemic risk. The primary classification criterion is training compute: under Article 51(2), a model trained with a cumulative amount of more than 10^25 floating-point operations is presumed to have high-impact capabilities and therefore to pose systemic risk. The Commission can also designate a model as presenting systemic risk under Article 51(1)(b), based on its capabilities or the breadth of its deployment, even if it does not exceed the compute threshold.
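As a rough sanity check against the compute threshold, training compute is commonly approximated with the ~6ND heuristic: about six floating-point operations per parameter per training token. The sketch below uses that heuristic with hypothetical model sizes; it is an order-of-magnitude screen, not the legally operative calculation.

```python
# Article 51(2) presumption threshold: cumulative training compute above 1e25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common ~6*N*D heuristic: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the Article 51(2) presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
# gives 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the 1e25 presumption threshold.
```

Because the presumption turns on cumulative compute, a provider close to the line should track actual accelerator-hours rather than rely on a parameter-count heuristic.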

The Article 55 obligations are more demanding. First, under Article 55(1)(a), the provider must perform and document model evaluation, including adversarial testing, commonly described as red-teaming, in accordance with state-of-the-art methodologies. The testing must probe for the model's most significant systemic risks, including CBRN (chemical, biological, radiological, and nuclear) content generation, cyberattack facilitation, and broad societal influence capabilities. The GPAI Code of Practice provides methodological guidance on what constitutes adequate adversarial testing.
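Documented test results are what turn red-teaming into evidence. The record structure below is a hypothetical illustration of the kind of evidence trail a provider might keep; the field names and serialisation format are assumptions, not a format prescribed by the regulation or the Code.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AdversarialTestRecord:
    """Hypothetical evidence record for one adversarial test run."""
    test_id: str
    risk_category: str   # e.g. "CBRN", "cyber-offence"
    methodology: str     # reference to the testing protocol followed
    run_date: date
    outcome: str         # e.g. "refused", "partial", "elicited"
    mitigation: str = "" # mitigation applied in response, if any

def to_report_row(rec: AdversarialTestRecord) -> dict:
    """Serialise one record into a plain dict for the documentation file."""
    row = asdict(rec)
    row["run_date"] = rec.run_date.isoformat()
    return row
```

Keeping each run as a dated, categorised record makes it straightforward to show, on request, which risk categories were tested, when, under which protocol, and what was done about adverse findings.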

Second, under Article 55(1)(c), the provider must track, document, and report serious incidents, defined in Article 3(49) as incidents or malfunctions that directly or indirectly lead to death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of fundamental rights obligations, or serious harm to property or the environment, together with possible corrective measures. Reports go to the AI Office and, as appropriate, to national competent authorities, which reflects the EU-level governance role for systemic risk GPAI models.

Third, under Article 55(1)(d), the provider must ensure an adequate level of cybersecurity protection for the model and the physical infrastructure supporting it. The regulation does not prescribe specific standards here, but in practice the measures will align with the standards and guidelines that apply to critical digital infrastructure in the EU, including those under the NIS 2 Directive (Directive (EU) 2022/2555).

Fourth, under Article 55(1)(b), the provider must assess and mitigate possible systemic risks at Union level, including their sources, on a continuous basis. Energy consumption is addressed separately: Annex XI requires the technical documentation to state the model's known or estimated energy consumption, reflecting the EU's broader concern about the environmental footprint of large-scale AI models.

The deployment chain and documentation. When a deployer integrates a commercial GPAI model into an agent product, they depend on the Article 53(1)(b) information from the GPAI provider to fulfil their own obligations. If the GPAI provider has not supplied adequate technical documentation and capability information, the deployer cannot fully document the risks of the integrated system. Deployers should confirm contractually that their GPAI model provider has fulfilled its Article 53 obligations and will maintain and update documentation as the model evolves. For most commercial API deployments of frontier models, the major providers have published GPAI compliance documentation aligned with the Code of Practice.

The GPAI Code of Practice

Article 56 of Regulation (EU) 2024/1689 requires the AI Office to facilitate the elaboration of codes of practice covering the obligations under Articles 53 and 55. The GPAI Code of Practice has been developed through a structured multi-stakeholder process convened by the AI Office, with working groups on transparency, safety evaluation, incident reporting, and copyright compliance. The Code entered its operational phase in early 2026 following successive drafting rounds and public consultation.

The significance of the Code is that adherence to it is a recognised means of demonstrating compliance. Under Articles 53(4) and 55(2), providers may rely on codes of practice to demonstrate compliance with the corresponding provisions until a harmonised European standard is published, at which point compliance with the standard grants a presumption of conformity. This mirrors the presumption-of-conformity mechanism used for harmonised standards elsewhere in European product law. Providers that choose not to follow the Code must demonstrate compliance by alternative adequate means, which in practice requires more extensive direct engagement with the AI Office and market surveillance authorities.

The Code also provides methodological detail on adversarial testing under Article 55, setting out the risk categories that must be tested, the minimum scope of testing, the documentation requirements for test results, and the criteria for what constitutes an adequate mitigation measure. Providers who have followed the Code's adversarial testing methodology have a clear evidentiary record for demonstrating Article 55 compliance.

Implications for deployers of GPAI-based agent products

For a deployer whose AI agent product is built on a GPAI model via API, the immediate obligations are primarily under the high-risk and transparency provisions of the regulation rather than under Chapter V. The deployer's Article 26 obligations apply if the agent is a high-risk system. The Article 50 transparency obligation applies if the agent interacts with natural persons. For the connection between GPAI model use and Article 50 transparency labelling, see the Article 50 transparency labelling guide.

Where the deployer has fine-tuned the GPAI model, deployed it under their own brand as a distinct AI system, or substantially modified it, the Article 25 analysis is required. The practical question is whether the post-modification system retains the character of the original GPAI model or constitutes a new system. The AI Office's guidance from the first half of 2026 has begun to address this question, but the regulatory answer will ultimately depend on case-by-case assessment of the nature and extent of the modification.

The prudent posture for a deployer who has fine-tuned a GPAI model is to assume the provider transition applies and to prepare the Article 53 documentation as if they were a GPAI provider. This is more conservative than the legal minimum in all circumstances, but it is substantially less expensive than discovering during an AI Office investigation that provider obligations were triggered without the documentation to demonstrate compliance. For the documentation structure that covers both the deployer Article 26 file and a potential Article 53 GPAI file, see documenting AI agent risk management for compliance and the operator obligations compliance guide.

Frequently asked questions

When did GPAI model obligations under the EU AI Act take effect?

Article 113(b) of Regulation (EU) 2024/1689 provides that Chapter V (Articles 51 to 56) applies from 2 August 2025. GPAI providers have been under binding obligations since that date. The proposed Digital Omnibus delay does not affect this timeline.

Does deploying a product built on a GPAI model make the deployer a GPAI provider?

Not automatically. Using a GPAI model via API without modification does not trigger provider obligations. Fine-tuning, deploying under a proprietary brand as a distinct AI system, or substantially modifying the model may trigger the Article 25 provider transition. Each situation requires a specific assessment against Article 25(1).

What is the 10^25 FLOPs threshold and why does it matter?

Article 51(2) presumes systemic risk for models trained with a cumulative compute of more than 10^25 FLOPs. Models above this threshold face the additional Article 55 obligations, including adversarial testing, serious incident notification to the AI Office, and cybersecurity measures. The Commission can update the threshold by delegated act under Article 51(3).

What is the GPAI Code of Practice?

Article 56 requires the AI Office to facilitate development of codes of practice for the obligations in Articles 53 and 55. Providers may rely on adherence to the Code to demonstrate compliance with those provisions. The Code entered its operational phase in early 2026 and provides detailed methodological guidance on adversarial testing, transparency, and incident reporting.

References

  1. Regulation (EU) 2024/1689 (Artificial Intelligence Act), Articles 3(63), 25, 51, 52, 53, 54, 55, 56, 113. OJ L, 12 July 2024.
  2. Regulation (EU) 2024/1689, Annex XI, technical documentation requirements for GPAI models.
  3. Directive (EU) 2019/790 on copyright in the digital single market.
  4. Directive (EU) 2022/2555 on measures for a high common level of cybersecurity (NIS 2 Directive).
  5. European AI Office. GPAI Code of Practice. Multi-stakeholder process, 2024 to 2026.
  6. European AI Office. Guidelines on the Article 3(63) definition of general-purpose AI model. 2025.
  7. European AI Office. Adversarial testing methodology guidance for systemic risk GPAI models. 2025.
  8. Regulation (EU) 2024/1689, Article 3(49), definition of serious incident.
  9. European AI Office. Guidance on Article 25 provider transition obligations. 2026.