Article 26 is addressed to deployers, not to providers. It creates obligations that are independent of what the provider has or has not done, and those obligations cannot be delegated back to the provider by contract. Understanding each paragraph in sequence is essential for any deployer building a compliance programme for high-risk AI systems ahead of the enforcement deadline.
Key takeaways
- Article 26 contains nine categories of deployer obligation. Each one is independent and carries its own compliance requirement, document, and potential penalty exposure.
- Article 26(1) requires deployers to use high-risk AI systems strictly in accordance with the provider's instructions for use. Use outside that scope is a breach of Article 26(1) and may trigger provider-equivalent obligations under Article 25.
- Article 26(5) requires deployers to notify the relevant market surveillance authority of serious incidents without undue delay. This is not voluntary. The notification duty runs alongside the separate duty under Article 26(6) to notify the provider.
- Article 26(9) requires a fundamental rights impact assessment before first use for public-body deployers and deployers in high-exposure contexts including employment, credit, insurance, and essential services.
- The full Article 26 file requires at minimum: instructions compliance record, oversight register, monitoring procedure, incident log, log retention record, person-notification procedure, FRIA where applicable, and database registration record.
Article 26(1): Using the system according to instructions
The first obligation is foundational. Deployers must take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions for use accompanying those systems. The instructions for use are a statutory document that providers must supply under Article 13 of Regulation (EU) 2024/1689.
The practical content of this obligation has three layers. First, the deployer must obtain the instructions for use before putting the system into service. This requires formal procurement steps that many organisations currently skip in favour of informal onboarding. Second, the deployer must read and understand them at an organisational level: the instructions must be reviewed by persons with both operational and compliance authority, not filed without review. Third, the deployer must establish controls that actually constrain operation to the permitted scope: policies, access controls, and monitoring that prevent use outside the conditions the provider assessed.
Where a provider supplies instructions for use that are too generic to apply to a specific deployment context, the deployer should request supplementary documentation. A deployer who relies on instructions that are clearly inadequate for their actual use case cannot claim that they fulfilled Article 26(1) because they followed the instructions they received. The obligation is to use the system in accordance with adequate instructions, not to use whatever instructions were supplied regardless of their quality.
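By way of illustration, the three layers can be evidenced in a single structured record. The sketch below is a minimal Python example; the class name, fields, and gap check are assumptions chosen for illustration, not a format the Regulation prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InstructionsComplianceRecord:
    """Illustrative record of Article 26(1) compliance for one system."""
    system_name: str
    instructions_version: str
    received_on: date                                         # layer 1: obtained before first use
    reviewed_by: list[str] = field(default_factory=list)      # layer 2: operational and compliance reviewers
    reviewed_on: date | None = None
    scope_controls: list[str] = field(default_factory=list)   # layer 3: policies, access controls, monitoring

    def gaps(self) -> list[str]:
        """Name the layers that are not yet evidenced."""
        missing = []
        if not self.reviewed_by or self.reviewed_on is None:
            missing.append("organisational review of the instructions (layer 2)")
        if not self.scope_controls:
            missing.append("controls constraining use to the assessed scope (layer 3)")
        return missing
```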
Article 26(2): Human oversight
Article 26(2) requires deployers to assign the task of overseeing the high-risk AI system to natural persons who have the necessary competence, training, authority, and support. This provision translates the design requirement in Article 14 into a staffing obligation for deployers.
The four terms in Article 26(2) (competence, training, authority, and support) are not interchangeable. A person with competence and training who lacks organisational authority to halt the system is not adequate oversight. A person with authority who lacks the training to recognise a dysfunction is not adequate oversight. All four elements must be present in each person assigned oversight responsibility.
The oversight register is the document that proves Article 26(2) compliance. It names the persons assigned, their qualifications for each Article 14(4) capability, the tools and access rights they have been given, and the escalation path to a decision maker who can halt the system. For a detailed analysis of Article 14 and the oversight design requirement that Article 26(2) builds on, see the Article 14 human oversight analysis.
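A register entry could be structured along the following lines. This is a hedged Python sketch: the schema and the adequacy check are illustrative assumptions; the Regulation prescribes the four elements, not a data format.

```python
from dataclasses import dataclass, field

@dataclass
class OversightRegisterEntry:
    """Illustrative entry evidencing the four Article 26(2) elements for one person."""
    person: str
    competence_evidence: str                                   # e.g. role experience mapped to Article 14(4) capabilities
    training_completed: list[str] = field(default_factory=list)
    can_halt_system: bool = False                              # authority: organisational power to stop the system
    tools_and_access: list[str] = field(default_factory=list)  # support: dashboards, override rights
    escalation_contact: str = ""                               # decision maker who can halt the system

    def is_adequate(self) -> bool:
        """All four elements must be present; any one missing fails Article 26(2)."""
        return bool(self.competence_evidence and self.training_completed
                    and self.can_halt_system and self.tools_and_access)
```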
Article 26(3): Monitoring operation
Article 26(3) requires deployers to monitor the operation of high-risk AI systems on the basis of the instructions for use. Monitoring is not limited to checking whether the system is functioning technically. It encompasses reviewing whether the system's outputs in actual deployment are consistent with its intended purpose, whether performance is degrading over time, and whether the population of inputs the system is processing in deployment matches the conditions under which it was assessed.
The requirement to monitor on the basis of the instructions for use connects Article 26(3) directly back to Article 26(1). A deployer who has not adequately understood the instructions for use cannot perform meaningful monitoring, because the instructions define the expected behaviour against which monitoring compares actual performance.
A monitoring procedure should specify: what indicators are tracked, at what frequency, by whom, what thresholds trigger an escalation, and how deviations are documented. The frequency must be proportionate to the risk level of the system. A system making consequential employment decisions in real time requires tighter monitoring intervals than one producing recommendations reviewed by a human before any action is taken.
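One way to make such a procedure operational is to express indicators, frequencies, and thresholds as data and evaluate observations against them. The sketch below is illustrative only: the indicator names, thresholds, and owners are invented for the example and would in practice be derived from the instructions for use.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    check_frequency: str   # proportionate to risk, e.g. "hourly" or "weekly"
    threshold: float       # deviation level that triggers an escalation
    owner: str             # who reviews this indicator

def breaches(indicator: Indicator, observed: float) -> bool:
    """Return True if the observation exceeds the threshold and must be escalated."""
    return observed > indicator.threshold

# Invented indicators for the example; real values come from the instructions for use.
indicators = [
    Indicator("output_distribution_drift", "daily", threshold=0.15, owner="model risk team"),
    Indicator("reviewer_override_rate", "weekly", threshold=0.30, owner="operations lead"),
]

for ind in indicators:
    if breaches(ind, observed=0.22):  # stubbed observation for the example
        print(f"Escalate: {ind.name} breached its threshold; notify {ind.owner}")
```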
Article 26(4): Notifying providers of issues
Article 26(4) requires deployers to inform providers, and where relevant distributors, when they have reason to believe that use of the system in accordance with its instructions for use may present a risk to health, safety, or fundamental rights. This is a proactive duty. It does not require the risk to have materialised into harm. A reasonable belief that a risk exists based on what the deployer observes in operation is sufficient to trigger the notification obligation.
This provision is practically important because it requires deployers to maintain communication channels with providers beyond the initial procurement stage. A deployer who has no mechanism for reporting operational concerns back to the provider cannot satisfy Article 26(4). The supply agreement should include a defined escalation procedure for safety-relevant operational observations, with a documented contact point at the provider.
Article 26(5) and 26(6): Serious incident reporting
Article 26(5) requires deployers who become aware of serious incidents involving their high-risk AI system to report to the relevant national market surveillance authority without undue delay. Article 26(6) creates a parallel duty to report serious incidents to the provider.
A serious incident is defined in Article 3(49) as any incident or malfunction that, directly or indirectly, leads or may lead to the death of a person, serious injury, or serious adverse effects on health, safety, or fundamental rights. The definition is forward-looking as well as retrospective: it includes incidents that may lead to these outcomes, not only those that have already done so.
The "without undue delay" standard does not set a fixed timeframe in Article 26. Sector-specific legislation may narrow this. EIOPA's supervisory guidance for European insurers using AI, published in August 2025, sets an expectation of prompt notification consistent with the notification timelines that apply to IT incidents under DORA. For most enterprises, a target of 72 hours from confirmed identification is a reasonable baseline that aligns with existing incident response frameworks.
The incident log is the document that demonstrates compliance with this obligation. It must record the incident date, the nature of the incident, the outputs that triggered it, the persons affected, the steps taken in response, the date of notification to the provider and the authority, and the reference number or confirmation received from each. Incidents that were assessed and found not to meet the serious incident threshold should also be logged, with the reasoning recorded.
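A log entry capturing those fields might be structured as follows. The schema is an illustrative assumption, as is the 72-hour overdue check, which reflects the internal baseline discussed above rather than a statutory deadline.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentLogEntry:
    """Illustrative entry covering the fields a supervisory authority will expect."""
    incident_date: date
    nature: str
    triggering_outputs: str
    persons_affected: int
    response_steps: list[str] = field(default_factory=list)
    provider_notified_on: date | None = None
    authority_notified_on: date | None = None
    provider_reference: str = ""
    authority_reference: str = ""
    meets_serious_threshold: bool = True
    threshold_reasoning: str = ""  # record the reasoning even when the threshold is not met

    def notification_overdue(self, today: date, target_days: int = 3) -> bool:
        """Flag a serious incident not yet notified within the internal target.
        The 72-hour target is an internal baseline, not a statutory deadline."""
        if not self.meets_serious_threshold or self.authority_notified_on:
            return False
        return (today - self.incident_date).days > target_days
```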
Article 26(7): Log retention
Article 26(7) requires deployers to retain the logs automatically generated by the high-risk AI system for the period specified in the Union or national law applicable to the use case, or at minimum for six months. This obligation applies where the deployer has control over the logs. Where a provider retains logs as part of a cloud service arrangement, the deployer must contractually confirm retention obligations and access rights.
The logs are the primary evidence base for any supervisory investigation. A deployer who cannot produce logs is in an evidentially weak position from the opening of any inquiry. The log retention obligation intersects with GDPR obligations where logs contain personal data: retention beyond the minimum lawful period requires a lawful basis, and deployers should ensure their log retention policy addresses this intersection.
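The retention deadline can be computed mechanically, as in this sketch. The six-month floor comes from Article 26(7); the day-count approximation of six months and the function signature are assumptions made for illustration.

```python
from datetime import date, timedelta

SIX_MONTHS = timedelta(days=184)  # conservative day-count approximation of the six-month floor

def retention_deadline(log_date: date, sector_period: timedelta | None = None) -> date:
    """Earliest date logs may be deleted: the longer of the six-month floor
    and any Union or national sector-specific retention period."""
    applicable = max(SIX_MONTHS, sector_period or SIX_MONTHS)
    return log_date + applicable

# Example: a sector rule requiring two years overrides the six-month floor.
print(retention_deadline(date(2026, 8, 2), sector_period=timedelta(days=730)))
```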
Article 26(8): Informing affected persons
Article 26(8) requires deployers to inform natural persons subject to a high-risk AI system decision that they are interacting with such a system where the decision is consequential. In employment contexts, this means workers and candidates must be informed when AI is used to make or substantially influence hiring, promotion, performance assessment, or termination decisions. In credit and insurance contexts, it means applicants must be informed when AI substantially determines the outcome of their application.
This provision operates in parallel with, and does not replace, the individual rights provisions under GDPR Articles 13, 14, and 22, and the transparency obligations under Article 50 of the EU AI Act. Deployers must map all three sets of obligations and ensure their disclosure procedures satisfy all of them simultaneously. In many cases, a redesigned adverse-action notice or a new section in the privacy notice will address most of the practical requirements.
Article 26(9): Fundamental rights impact assessment
Article 26(9) is the provision with the most significant new documentation requirement for a subset of deployers. It requires a fundamental rights impact assessment before putting a high-risk AI system into service where the deployer is a public-sector body, or a private deployer using the system in credit, insurance, employment, or essential services contexts.
The FRIA must assess the impact on persons or groups of persons who may be affected by the system, covering the risk of algorithmic discrimination, data protection implications, and any broader impact on the rights and freedoms guaranteed under the Charter of Fundamental Rights of the European Union. The assessment must be performed before first use and updated on any material change to the system, the use case, or the population affected.
The content requirements for the FRIA are set out in Article 27. They include a description of the system and its purpose, the deployer's assessment of potential risks to fundamental rights, the mitigation measures adopted, and the outcome of the assessment including the conclusion on whether to proceed with deployment. For a step-by-step FRIA guide, see the FRIA guide under Article 27.
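Those content heads translate naturally into a structured record. The following Python sketch is illustrative only; the field names and the update trigger are assumptions, and the authoritative content list remains Article 27.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FriaRecord:
    """Illustrative FRIA record mirroring the Article 27 content heads."""
    system_description: str
    intended_purpose: str
    affected_groups: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)     # incl. algorithmic discrimination
    mitigation_measures: list[str] = field(default_factory=list)
    conclusion_proceed: bool = False                              # outcome: whether to deploy
    assessed_on: date | None = None

    def needs_update(self, material_change_on: date) -> bool:
        """A material change to the system, use case, or population triggers reassessment."""
        return self.assessed_on is None or material_change_on > self.assessed_on
```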
Article 26(10): Database registration
Article 26(10) requires certain deployers to register their use of high-risk AI systems in the EU database maintained by the Commission under Article 71 before putting the system into service. The registration obligation applies to deployers of systems in Annex III, point 1, covering remote biometric identification, and to public-authority deployers of systems listed elsewhere in Annex III.
The registration information required is set out in Annex VIII, Part II. It includes the deployer's name and contact information, the system's registration number in the EU database, and the intended use in the deployer's specific deployment context. Registration must be updated if the use changes materially. The EU database became operational in 2025 and is accessible through the EU AI Office portal.
Building the complete Article 26 compliance file
Each paragraph of Article 26 corresponds to a document or a documented procedure. The minimum compliance file for a deployer of a high-risk AI system contains: a record of instructions for use receipt and review, an oversight register, a monitoring procedure with threshold definitions, an incident log, a log retention policy with retention confirmation from the provider, a person-notification procedure integrated with the relevant workflows, a FRIA for deployers within the Article 26(9) scope, and a database registration confirmation.
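A simple completeness check over that file can flag gaps before a supervisory request arrives. The sketch below is illustrative; the document identifiers and the applicability flags are assumptions, not regulatory terms.

```python
# Documents every high-risk deployer needs, regardless of context.
BASELINE_DOCUMENTS = {
    "instructions_compliance_record",
    "oversight_register",
    "monitoring_procedure",
    "incident_log",
    "log_retention_policy",
    "person_notification_procedure",
}

def missing_documents(on_file: set[str], fria_required: bool, registration_required: bool) -> set[str]:
    """Return the documents still missing from the deployer's Article 26 file."""
    required = set(BASELINE_DOCUMENTS)
    if fria_required:          # Article 26(9) scope: public bodies and high-exposure contexts
        required.add("fria")
    if registration_required:  # Article 26(10) scope: biometric and public-authority deployers
        required.add("database_registration_confirmation")
    return required - on_file

print(missing_documents({"oversight_register", "incident_log"},
                        fria_required=True, registration_required=False))
```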
None of these documents needs to be long. Each needs to be accurate, complete, and available to a supervisory authority on request within the timeframe that the relevant national competent authority sets. The market surveillance authorities being stood up across EU member states in 2026 are beginning to publish guidance on the format and content they expect. The German Federal Office for Artificial Intelligence and the Netherlands AI Authority have both issued preliminary statements indicating a document-first approach to initial investigations.
For the connection between the Article 26 file and insurance coverage, see preparing an AI agent underwriting submission for European insurers. The documentation that satisfies Article 26 and the documentation that enables an insurance policy are substantially the same set of records.
For context on how Article 26 fits within the broader operator file, see the operator obligations compliance guide and the Article 9 risk management system analysis.
Frequently asked questions
What are the main obligations of deployers under Article 26 of the EU AI Act?
Article 26 places nine categories of obligation on deployers of high-risk AI systems: using the system in accordance with the provider's instructions; assigning competent oversight persons; monitoring operation; notifying providers of risks; reporting serious incidents to authorities; retaining logs; informing affected persons; conducting a FRIA where required; and registering in the EU database where applicable.
Do Article 26 obligations apply to all AI systems or only high-risk ones?
Article 26 applies only to deployers of high-risk AI systems as defined in Article 6 and Annex III of Regulation (EU) 2024/1689. Systems outside those categories may still be subject to transparency obligations under Article 50 but are not subject to the full Article 26 duty-set.
Who must conduct a fundamental rights impact assessment under Article 26?
Article 26(9) requires a FRIA before first use where the deployer is a public body, or where the deployer uses the system in employment, credit, insurance, or essential services contexts. Not all deployers face this obligation. It targets the contexts where fundamental rights exposure is highest.
What happens if a deployer fails to notify a serious incident under Article 26?
Failure to notify the market surveillance authority of a serious incident falls within the second tier of Article 99 penalties: up to EUR 15 million or 3 per cent of total worldwide annual turnover, whichever is higher. The obligation to notify the provider under Article 26(6) is separate and runs concurrently.
Must deployers register in the EU AI database before using a high-risk system?
Article 26(10) requires certain deployers, namely deployers of biometric identification systems and public-authority deployers, to register in the EU database before first use. Registration information is defined in Annex VIII. The EU database is maintained by the Commission under Article 71.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 26, Regulation (EU) 2024/1689, obligations of deployers of high-risk AI systems.
- Article 6 and Annex III, Regulation (EU) 2024/1689, high-risk AI system classification.
- Article 13, Regulation (EU) 2024/1689, transparency and instructions for use.
- Article 14, Regulation (EU) 2024/1689, human oversight of high-risk AI systems.
- Article 25(1), Regulation (EU) 2024/1689, when deployers become providers.
- Article 27, Regulation (EU) 2024/1689, fundamental rights impact assessment.
- Article 71, Regulation (EU) 2024/1689, EU database for high-risk AI systems.
- Annex VIII, Regulation (EU) 2024/1689, information for database registration.
- Article 73, Regulation (EU) 2024/1689, incident reporting to market surveillance authorities.
- Article 99, Regulation (EU) 2024/1689, penalties for non-compliance.
- Regulation (EU) 2016/679 (GDPR), Articles 13, 14, and 22.
- European Insurance and Occupational Pensions Authority. Opinion on artificial intelligence governance in the insurance and occupational pensions sectors. August 2025.