Article 9 sits in Chapter III, Section 2 of Regulation (EU) 2024/1689. It applies to providers of high-risk AI systems but its outputs feed directly into deployer obligations. A deployer that understands what Article 9 requires can verify whether the systems they procure have been properly assessed, whether the instructions for use they receive are adequate, and whether they are operating within the risk parameters the provider documented. Those are not abstract compliance questions. They are the difference between defensible and indefensible deployment.
Key takeaways
- Article 9 requires a risk management system that is continuous, iterative, and updated throughout the entire lifecycle of a high-risk AI system, not completed once at the point of development.
- The system must identify known and reasonably foreseeable risks, evaluate residual risks after mitigation, and document the measures adopted. This documentation forms the basis of the instructions for use that deployers receive.
- Deployers interact with Article 9 indirectly but consequentially: operating outside the risk parameters the provider documented puts the deployer in breach of Article 26(1) and potentially triggers provider obligations under Article 25(1)(b) if the use constitutes a substantial modification.
- A deployer who cannot obtain adequate Article 9 documentation from a provider has a procurement problem, a compliance problem, and an insurance problem. Insurers treat the absence of a documented risk management system as a material underwriting risk.
- Penalties for non-compliance with Article 9 fall under Article 99(4): up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher, for providers breaching the requirements of Chapter III, Section 2, with the same tier applying to deployers breaching Article 26. The top tier of Article 99, up to EUR 35 million or 7 per cent, is reserved for the prohibited practices in Article 5.
The structure of Article 9
Article 9 opens by stating that high-risk AI systems shall be subject to a risk management system. The word "system" is deliberate. The provision does not require a risk assessment in the traditional regulatory sense of a document produced before deployment. It requires an ongoing organisational capability: a set of processes, roles, records, and review cycles that operate continuously across the system's operational life.
Article 9(1) requires the system to be established, implemented, documented, and maintained. Article 9(2) adds the continuity requirement: the risk management system must be a continuous and iterative process planned and run throughout the entire lifecycle, subject to regular systematic review and updating. Each iteration must examine whether the risk measures in place remain effective, whether new risks have emerged from operation in the real world, and whether changes to the system or its deployment context have altered the residual risk profile.
Article 9(2)(a) addresses identification. The provider must identify and analyse the known and reasonably foreseeable risks to health, safety, or fundamental rights that the system can pose when used in accordance with its intended purpose. Article 9(2)(b) extends the analysis to reasonably foreseeable misuse: ways in which persons with ordinary competence might predictably misuse or adapt the system so as to generate harm, even if those uses are contrary to the instructions for use.
The identification obligation extends to risks that arise from the interaction of the AI system with other systems in the deployment environment, not only from the AI system operating in isolation. An agent that takes actions inside an enterprise software environment can generate risks through those interactions that do not appear when the agent is tested alone. Those interaction risks are within scope.
Risk evaluation and the residual risk standard
Article 9(2)(b) requires the provider to estimate and evaluate the risks that may emerge when the system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse. The evaluation must weigh the severity of the potential harm, its breadth across the number of persons potentially affected, and the degree to which the harm would be difficult to reverse once it occurred.
These three parameters create a prioritisation framework. A risk that affects many people with severe and irreversible consequences warrants the highest level of risk management attention. A risk that affects a small number of people with minor and correctable consequences warrants proportionate attention at a lower level. The proportionality principle in Article 9 is not a licence to ignore low-severity risks. It is a framework for allocating the depth of mitigation effort across a risk register.
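The three evaluation parameters can be sketched as a simple scoring rubric. This is purely illustrative: Article 9 prescribes no scoring formula, and the field names, 1-to-3 scales, and multiplicative weighting here are assumptions, not anything in the regulation.

```python
from dataclasses import dataclass

# Illustrative only: Article 9 does not prescribe a scoring formula.
# Severity, breadth, and irreversibility are each scored 1 (low) to 3 (high);
# their product gives a rough rank for allocating mitigation effort.
@dataclass
class Risk:
    name: str
    severity: int        # how serious the potential harm is (1-3)
    breadth: int         # how many persons are potentially affected (1-3)
    irreversibility: int # how hard the harm is to undo once it occurs (1-3)

    def priority(self) -> int:
        return self.severity * self.breadth * self.irreversibility

register = [
    Risk("discriminatory screening outcome", severity=3, breadth=3, irreversibility=2),
    Risk("transient UI mislabel", severity=1, breadth=2, irreversibility=1),
]

# Highest-priority risks first: the ordering a risk register review would
# work through, with the deepest mitigation effort applied at the top.
for risk in sorted(register, key=Risk.priority, reverse=True):
    print(risk.name, risk.priority())
```

The point of such a rubric is not the arithmetic but the record it forces: each risk gets an explicit severity, breadth, and reversibility judgment that can be revisited at the next review cycle.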
The concept of residual risk appears at Article 9(5). Providers must adopt appropriate risk management measures, and they must ensure that the residual risk after those measures remains within acceptable levels for the use case. The acceptable level of residual risk is not defined in the text of the regulation. It is determined by reference to the state of the art, the purpose of the system, the severity of potential harms, and, where applicable, the sector-specific Union harmonisation legislation listed in Annex I.
For deployers, the residual risk documentation is the most operationally significant output of Article 9. It tells them what risk remains after the provider's mitigations, and therefore what additional controls they must apply at the deployment level through Article 26(1). A deployer who uses a system in a context with higher residual risk than the provider's assessment assumed has, in effect, changed the risk profile the system was assessed against. That is the boundary beyond which Article 25(1)(b) can apply.
Risk management measures in practice
Article 9(5) specifies the types of risk management measures that providers must adopt, listed in order of preference. Safety by design is the first preference: eliminate or reduce risks, in so far as technically feasible, by making design and development choices that prevent the hazardous condition from arising. Where elimination is not technically feasible, mitigation and control measures are second: change the system's architecture, training, or output behaviour to reduce the frequency or magnitude of the risk. Where residual risk remains after source-level mitigation, information is the third layer: make the residual risk known to deployers through the information required under Article 13 and, where appropriate, training.
The preference order matters for deployers because it shapes what they can reasonably expect from a well-governed provider. A provider who has relied primarily on instructions for use to address a significant risk, rather than design-level mitigation, has a weaker Article 9 position than one who has exhausted technical mitigations first. Reviewing this ordering during procurement due diligence is a practical step that most compliance teams currently skip.
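One way to operationalise that due-diligence step is to flatten the provider's risk summary into risk-to-mitigation-layer pairs and flag every risk addressed only through instructions for use. The data shape and risk names below are hypothetical; the layer labels mirror the Article 9(5) preference order, but nothing here is a provider API.

```python
# Sketch of a procurement due-diligence check. The mapping of risks to
# mitigation layers is an assumed, hand-built summary of provider
# documentation, not a standard format.
DESIGN, SOURCE_MITIGATION, INSTRUCTIONS = "design", "mitigation", "instructions"

provider_summary = {
    "prompt injection via connected tools": [SOURCE_MITIGATION, INSTRUCTIONS],
    "hallucinated case citations": [INSTRUCTIONS],          # third layer only
    "unsafe autonomous actions": [DESIGN, INSTRUCTIONS],
}

# Flag risks the provider addresses only through instructions for use:
# the weakest Article 9 position, and a question to raise before contract.
flagged = [risk for risk, layers in provider_summary.items()
           if set(layers) == {INSTRUCTIONS}]
print(flagged)  # ['hallucinated case citations']
```

Even as a spreadsheet rather than code, recording this layer-by-layer view during procurement produces exactly the evidence a deployer needs if the provider's Article 9 position is later questioned.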
Recital 65 adds an important gloss. Risk management measures must not adversely affect compliance with applicable Union or national law when properly implemented. A mitigation that causes the system to produce discriminatory outputs as a consequence of how the risk was treated would not satisfy Article 9 even if it eliminated the specific risk it targeted.
Testing obligations under Article 9(6) to (8)
Article 9(6) imposes a testing obligation that goes beyond the general requirement to have a risk management process. High-risk AI systems must be tested to identify the most appropriate and targeted risk management measures and to verify that those measures actually achieve their intended effect. Under Article 9(8), testing must be carried out against prior defined metrics and probabilistic thresholds appropriate to the intended purpose, and it must take place, at the latest, before the system is placed on the market or put into service.
The standard for adequacy shifts with the deployment context. A system intended for use in safety-critical environments, or one that makes consequential decisions affecting many people, requires more rigorous testing than one used for lower-stakes purposes. Testing should include adversarial conditions, edge cases, and in-production scenarios where those cannot be adequately simulated in a controlled environment.
Article 9(7) permits testing procedures to include testing in real-world conditions, in accordance with Article 60. This matters for systems deployed in complex enterprise environments where the full range of real-world conditions cannot be replicated in a test environment. Some providers respond by offering beta access periods, phased rollout programmes, or post-deployment monitoring commitments that substitute for pre-deployment real-world testing in those cases.
For deployers, the question to put to any provider is whether the pre-deployment testing covered the specific use case and the specific deployment environment. A system tested for financial services use in the United States and then deployed by a European enterprise in an employment screening context has likely not been adequately tested for its European deployment. This is not a hypothetical risk. It is a pattern that appears regularly in the procurement of US-origin AI products by European enterprises.
Post-deployment obligations and the lifecycle requirement
Article 9 does not end at the point of first deployment. The continuous and iterative nature of the obligation means that providers must update the risk management system as new information becomes available from operation in the real world. Article 9(2)(c) specifically requires the evaluation of risks arising from the analysis of data gathered through the post-market monitoring system referred to in Article 72.
This creates a feedback loop between the post-market monitoring obligations in Article 72 and the risk management system. Serious incidents and near-misses discovered through post-market monitoring must be fed back into the risk assessment. The measures must then be updated to address newly identified risks. The updated documentation must be provided to deployers in a form that allows them to adjust their deployment-level controls.
For deployers, this means the relationship with a provider does not end at procurement. A provider who discovers post-deployment risks and updates their risk management system has an obligation to communicate those changes to deployers who are affected. Deployers should confirm in their supply agreements that they will receive notification of material updates to the risk management system and updated instructions for use. Without this, a deployer who continues operating under outdated risk parameters may be in breach of Article 26(1) without knowing it.
When the deployer becomes a provider
Article 25(1)(b) is the provision that compliance teams often overlook until it becomes urgent. It provides that a deployer who makes a substantial modification to a high-risk AI system is treated as the provider of the modified system for the purposes of the EU AI Act, and must comply with all provider obligations, including Article 9.
The term "substantial modification" is defined in Article 3(23) as a change not foreseen or planned in the provider's initial conformity assessment which affects the system's compliance with the requirements of Chapter III, Section 2, or which modifies the intended purpose for which the system was assessed. Fine-tuning on proprietary data, integrating the system with other automated decision-making tools, changing the scope of autonomous actions the system is permitted to take, or deploying the system for a different purpose than that covered by the provider's conformity assessment are all modifications that could trigger this provision.
The practical consequence is that deployers who customise AI systems at the scale common in enterprise environments may be operating as providers without having recognised it. They would then need a risk management system of their own, conformity assessment, technical documentation, and registration in the EU database. This is the compliance scenario that creates the greatest gap between current enterprise practice and regulatory obligation.
What Article 9 means for insurance coverage
The documented risk management system is what makes AI agent insurance commercially possible. Insurers writing AI liability policies in Europe require evidence that the system being insured has been through a documented risk identification and mitigation process. Products from Munich Re aiSure, Armilla, and AIUC all require the insured to demonstrate an organised approach to AI risk management that closely maps to what Article 9 mandates.
AIUC-1, the certification standard published in 2025 that underpins ElevenLabs' February 2026 AI agent policy, requires evidence of a risk management process that includes identification of known risks, evaluation of mitigations, documentation of residual risk, and a monitoring process. This is Article 9 by another name. A deployer who has engaged seriously with the Article 9 documentation from their provider, supplemented it with a deployment-specific risk record, and established a monitoring process is already carrying most of what an AI insurer needs to underwrite a policy.
For the connection between compliance documentation and insurance eligibility, see preparing an AI agent underwriting submission for European insurers. For the certification framework that produces insurance-ready documentation systematically, see the FP Certified methodology.
Preparing a deployer's Article 9 file
Deployers cannot satisfy Article 9 on their own. The obligation sits with providers. What deployers can do is build a file that demonstrates they have engaged with the provider's Article 9 output and have applied appropriate deployment-level controls in response to it. That file has three components.
First, the instructions for use and any risk summaries received from the provider. These should be requested formally in writing, and the request and response retained. If the provider cannot supply them, or supplies a document so generic it provides no deployment-specific risk information, that is evidence of an inadequate Article 9 process on the provider's part, and a procurement decision that the deployer should document explicitly.
Second, a deployment-specific risk supplement. This is a deployer-produced document that identifies the additional risks created by the specific deployment context: the particular use case, the population of persons affected, the integration with other systems, and the monitoring infrastructure available. It should reference the provider's residual risk classification and describe how the deployer's controls respond to each residual risk.
Third, a review cycle commitment. The Article 9 system is iterative. The deployer's file should specify at what intervals the risk supplement will be reviewed, what triggers an unscheduled review (material changes to the system, a serious incident, new guidance from the national supervisory authority), and who is responsible for conducting and documenting the review.
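The third component, the review cycle, is simple enough to model mechanically. The sketch below is a minimal illustration under assumptions: the ninety-day interval and the trigger names are invented examples of what a deployer might commit to, not intervals or categories the regulation specifies.

```python
from datetime import date, timedelta

# Illustrative review-cycle check for the deployer's risk supplement.
# Interval and trigger names are assumed policy choices, not Article 9 text.
REVIEW_INTERVAL = timedelta(days=90)

UNSCHEDULED_TRIGGERS = {
    "material_system_change",
    "serious_incident",
    "new_supervisory_guidance",
}

def review_due(last_review: date, today: date, events: set[str]) -> bool:
    # A review is due on schedule, or immediately if any trigger event
    # has occurred since the last documented review.
    overdue = (today - last_review) >= REVIEW_INTERVAL
    triggered = bool(events & UNSCHEDULED_TRIGGERS)
    return overdue or triggered

print(review_due(date(2025, 1, 1), date(2025, 2, 1), {"serious_incident"}))  # True
print(review_due(date(2025, 1, 1), date(2025, 2, 1), set()))                 # False
```

Whatever tooling implements it, the essential property is that both the scheduled interval and the unscheduled triggers are written down in advance, so that a skipped or late review is visible in the record.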
For the full set of deployer obligations that this document fits within, see the operator obligations compliance guide. For the documentation architecture that supports the complete operator file, see how to document AI agent risk management for compliance.
Frequently asked questions
Does Article 9 of the EU AI Act apply to deployers directly?
Article 9 is primarily addressed to providers. Deployers interact with it indirectly through Article 26(1), which requires deployers to operate within the risk parameters and instructions the provider's Article 9 process produces. A deployer who substantially modifies a system becomes a provider under Article 25(1)(b) and must then satisfy Article 9 directly.
What documentation should a deployer receive from a provider in relation to Article 9?
The provider's Article 9 process produces the instructions for use, required under Article 13, which must be supplied to deployers. Deployers should also request a summary of the risk management measures adopted, the residual risk classification for their intended use case, and confirmation of the testing undertaken before deployment. The full technical documentation under Article 11 and Annex IV remains with the provider, but the instructions for use are a statutory supply obligation.
When does a deployer become responsible for their own risk management process?
A deployer becomes responsible for a provider-equivalent risk management process when they substantially modify the system, as defined in Article 3(23), or use it for a purpose not covered by the conformity assessment. Under Article 25(1)(b), such a deployer becomes a provider for the modified system and must comply with all provider obligations, including Article 9.
How does the Article 9 risk management system relate to insurance underwriting?
Insurers evaluating AI agent coverage treat the risk management system as the foundational underwriting document. Products such as Munich Re aiSure and the AIUC-1 standard require evidence of an iterative risk management process, documented residual risk, and post-market monitoring. A deployer who can present this documentation is already carrying most of what an insurer needs to price a policy.
What testing does Article 9 require before a high-risk AI system can be deployed?
Article 9(6) requires testing to identify the most appropriate and targeted risk management measures and to verify that those measures actually achieve their intended effect. Under Article 9(8), testing must be carried out against prior defined metrics appropriate to the intended purpose, and Article 9(7) permits it to include testing in real-world conditions in accordance with Article 60. Deployers should ask providers to confirm that pre-deployment testing covered the specific use case and context they intend to operate in.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 9, Regulation (EU) 2024/1689, risk management system requirements for high-risk AI systems.
- Article 3(23), Regulation (EU) 2024/1689, definition of substantial modification.
- Article 13, Regulation (EU) 2024/1689, transparency and instructions for use for deployers.
- Article 25(1)(b), Regulation (EU) 2024/1689, when deployers become providers.
- Article 26(1), Regulation (EU) 2024/1689, deployer obligations regarding instructions for use.
- Article 72, Regulation (EU) 2024/1689, post-market monitoring obligations.
- Article 99, Regulation (EU) 2024/1689, penalties for non-compliance.
- Recital 65, Regulation (EU) 2024/1689, the risk management system and proportionality.
- AIUC-1 AI Agent Certification Standard, Artificial Intelligence Underwriting Company, 2025.
- Munich Re aiSure product documentation, parametric AI performance insurance, 2025.
- ISO/IEC 42001:2023, Artificial intelligence management system.