Article 17 is the starting point of every deployer's compliance file, even though the deployer is not the one who writes it. Every obligation in Article 26 depends on the deployer having received, understood, and acted on the technical documentation the provider is required to produce. This analysis explains what that documentation must contain, how to evaluate whether what you have received is adequate, and what contractual steps to take when it is not.

Key takeaways

  • Article 17 requires providers to draw up technical documentation before placing a high-risk AI system on the market. The documentation must follow the structure specified in Annex IV of Regulation (EU) 2024/1689.
  • Deployers are not directly obligated to produce technical documentation, but they cannot comply with Article 26(1), (2), or (3) without having received and understood it.
  • Annex IV requires six substantive categories of information: general description, design specification, data governance, testing and validation, risk measures, and instructions for use. Each category has practical checklist implications for deployers evaluating what they have received.
  • Where provider documentation is generic or inadequate for the specific deployment context, deployers should formally request supplementary documentation. Proceeding without it is a compliance risk with penalty exposure under Article 99.
  • Technical documentation is also the primary evidence insurers need to underwrite AI liability policies. Deployers who cannot produce or access it face both a compliance gap and a coverage gap.

Why Article 17 matters to deployers

The structure of the EU AI Act creates a division of labour between providers and deployers. Providers bear primary responsibility for the design, testing, and documentation of high-risk AI systems. Deployers bear responsibility for how those systems are used, monitored, and overseen in practice. This division makes practical sense because providers have access to the system's architecture, training data, and validation results that deployers typically do not.

The problem is that the deployer's obligations under Article 26 are only actionable if the deployer has received what the provider was required to produce. Article 26(1) requires deployers to use the system in accordance with the provider's instructions for use. You cannot comply with instructions you have never received. Article 26(2) requires deployers to assign oversight to persons with the competence, training, and authority needed to understand the system's behaviour. You cannot train oversight personnel without knowing what the system does and what its known failure modes are. Article 26(3) requires monitoring on the basis of the instructions for use. Effective monitoring requires knowing what performance looks like and what a deviation from expected behaviour means in practice.

Every deployer compliance programme therefore has a prerequisite: obtain, review, and act on the provider's Article 17 technical documentation. This is not optional and it is not a formality. It is the document that makes the rest of Article 26 achievable.

What Article 17 and Annex IV require

Article 17(1) requires providers to draw up technical documentation before placing a high-risk AI system on the market or putting it into service. The documentation must be maintained and updated throughout the system's lifetime. Annex IV specifies the contents. A deployer evaluating the documentation they have received should check for the presence and adequacy of each of the following six categories.

Category 1: General description

The documentation must include a general description of the AI system covering its intended purpose, the name and version of the system, a description of the hardware it is intended to run on, and a description of the software interfaces and dependencies. It must also describe the interaction with external tools, data, and other AI systems where relevant. This category tells deployers whether the system described is the system they are actually using. Version mismatches between what the provider documented and what the deployer is running in production are a compliance failure that is typically invisible without explicit version-checking at procurement and at each update.
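That version-checking step can be reduced to a simple record-and-compare control. The sketch below is illustrative only: the field names and the shape of the record are assumptions for this example, not terms defined by the Act or by Annex IV.

```python
# Illustrative sketch: record the version named in the provider's Annex IV
# documentation at procurement, then compare it against the version actually
# running in production at each update. Field names are illustrative
# assumptions, not AI Act terminology.

from dataclasses import dataclass

@dataclass
class DocumentationRecord:
    system_name: str
    documented_version: str   # version stated in the received documentation
    received_date: str        # when the deployer received the documentation

def version_matches(record: DocumentationRecord, deployed_version: str) -> bool:
    """Flag the silent compliance failure described above: documentation
    that describes a different version than the one in production."""
    return record.documented_version == deployed_version

record = DocumentationRecord("credit-scoring-model", "2.4.1", "2025-03-01")
assert version_matches(record, "2.4.1")
assert not version_matches(record, "2.5.0")  # update deployed, docs not refreshed
```

Running this check at procurement and again on every provider update makes the mismatch visible instead of silent.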

Category 2: Detailed design specification

The second category requires a description of the AI techniques and approaches used, the system's architecture, and the design choices made. For deployers who are not AI engineers, this section is less immediately actionable, but it establishes the baseline against which the provider's risk management and testing claims can be evaluated. A system whose design is entirely opaque at this level is a system whose conformity assessment cannot be independently evaluated.

Category 3: Training data governance

Annex IV requires a description of the training, validation, and testing data used. This covers the datasets employed, their provenance, the data governance practices applied (including filtering, cleaning, and labelling), and any relevant characteristics of the data population that affect the system's performance in specific contexts. For deployers whose users include populations not well represented in the training data, this section of the documentation is where performance limitations relevant to their specific deployment should appear. An absence of information about data governance in this section is a material gap that deployers should formally raise with the provider.

Category 4: Testing and validation results

The documentation must include a description of the testing and validation procedures applied and the results. This includes accuracy metrics, robustness benchmarks, and performance data for the system in the conditions and on the populations for which it is intended. Importantly, Annex IV requires disclosure of performance on specific sub-groups where relevant to the high-risk use case. For a system used in employment decisions, for example, the documentation should disclose whether performance has been tested across demographic groups and what the findings were.

For deployers, this section is the empirical foundation of the instructions for use. If the provider's documentation shows strong average performance but does not disclose performance variation across sub-groups relevant to the deployment context, the documentation is incomplete. Deployers should request clarification before putting the system into service in contexts where differential performance has legal or fundamental rights implications.

Category 5: Risk management measures

Article 17(2) connects the technical documentation obligation directly to the risk management system required under Article 9. Annex IV requires documentation of the risk management measures taken and the residual risks identified. For deployers, this is critical reading: it tells them what the provider assessed the risks to be and what mitigations were built in at the design level. Residual risks are the risks that remain after the provider's mitigations. The deployer's own risk management under Article 9, and the monitoring procedure under Article 26(3), should be calibrated to address these residual risks explicitly.
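That calibration exercise can be made concrete by mapping each provider-identified residual risk to the deployer monitoring control intended to address it. The risk names and control names below are hypothetical examples, not content drawn from any actual Annex IV file:

```python
# Illustrative sketch: check the deployer's monitoring plan against the
# residual risks the provider identified. Risk and control names are
# hypothetical examples for illustration.

residual_risks = {
    "performance_drift_over_time",
    "reduced_accuracy_on_underrepresented_groups",
    "automation_bias_in_reviewers",
}

monitoring_controls = {
    "performance_drift_over_time": "weekly accuracy review against baseline",
    "automation_bias_in_reviewers": "sampled second review of accepted outputs",
}

# Residual risks with no corresponding monitoring control are gaps the
# deployer's monitoring procedure must close before deployment.
unmonitored = sorted(residual_risks - monitoring_controls.keys())
print(unmonitored)
# → ['reduced_accuracy_on_underrepresented_groups']
```

A non-empty result is exactly the procurement-stage finding described in this analysis: a residual risk the organisation is not yet equipped to monitor.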

Category 6: Instructions for use

The instructions for use are, in one sense, a separate document from the Annex IV technical documentation. But Article 13 requires them to accompany the system and Annex IV requires a summary to be included in the technical documentation. The instructions must cover the identity and contact details of the provider, the system's capabilities and limitations, the accuracy and robustness characteristics relevant to deployers, any known or foreseeable circumstances that may lead to risks, the technical measures for human oversight, the expected operational lifetime, and the maintenance and care measures necessary to maintain the system's performance.

The instructions for use are the operational document that Article 26(1) requires deployers to follow. Their quality directly determines whether compliance with Article 26 is achievable. Generic instructions that do not address the specific deployment context, the expected user population, or the sector-specific risks are not adequate. A financial services deployer using a credit-scoring system should have instructions that address the relevant regulatory context, the applicable accuracy standards, and the human oversight measures appropriate for the scale of automated decision-making involved.

Evaluating whether what you received is adequate

Providers vary significantly in the quality of documentation they produce. Some have invested in comprehensive technical documentation that genuinely supports deployer compliance. Others provide marketing materials dressed as instructions for use. A deployer evaluating the adequacy of what they have received should ask four questions.

First: does the documentation cover all six categories in Annex IV? This is the minimum structural test. Documentation that omits a category is incomplete on its face. Request the missing categories before proceeding.
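This structural test lends itself to a trivial checklist. In the sketch below, the category labels follow this article's six-category summary of Annex IV, not the annex's literal headings; treat them as an illustrative convention:

```python
# Illustrative sketch of the minimum structural test: does the received
# documentation cover all six categories summarised above? Labels follow
# this article's summary of Annex IV, not the annex's literal headings.

ANNEX_IV_CATEGORIES = [
    "general_description",
    "design_specification",
    "data_governance",
    "testing_and_validation",
    "risk_management_measures",
    "instructions_for_use",
]

def missing_categories(received: set[str]) -> list[str]:
    """Return the categories absent from the received documentation,
    preserving the checklist ordering for the follow-up request."""
    return [c for c in ANNEX_IV_CATEGORIES if c not in received]

received = {"general_description", "design_specification",
            "testing_and_validation", "instructions_for_use"}
print(missing_categories(received))
# → ['data_governance', 'risk_management_measures']
```

A non-empty result means the documentation is incomplete on its face, and the missing categories become the content of the formal request to the provider.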

Second: is the documentation specific to the version you are deploying? Documentation that describes a model class rather than the specific version you are running, or instructions written for a deployment context other than your own, will not satisfy the Article 13 and Annex IV requirements. Version-specific documentation is a contractual right deployers should establish before procurement.

Third: does the documentation address the specific population and context of your deployment? Accuracy benchmarks derived from a different user population than the one you serve, or instructions written for a different sector than your own, may not adequately capture the risks that Article 26 requires you to manage. Request supplementary documentation describing performance and risk in your specific context.

Fourth: do the residual risks identified by the provider correspond to the monitoring obligations you can actually implement? A risk that is residual on the provider's assessment is a risk you must monitor for. If the provider has identified residual risks that your organisation lacks the technical capability to monitor, this is a procurement-stage finding that should affect the deployment decision.

The contractual dimension: what to include in procurement agreements

Deployers purchasing high-risk AI systems should treat documentation access as a contractual prerequisite, not a post-procurement request. The procurement contract should specify several protections.

Access to Annex IV documentation at the time of first deployment is a basic requirement. Providers who resist supplying this documentation at procurement stage are providers whose compliance posture is uncertain. This matters because if the provider is not in compliance with Article 17, the deployer faces a compounded risk: their own compliance depends on documentation that may not have been produced to the required standard.

Notification of material updates is a continuing right. Article 17(1) requires providers to update technical documentation when the system is substantially modified. Deployers should have a contractual right to receive updated documentation and a defined window to review it before any update is deployed in their environment. Operating a materially updated system without reviewing the updated documentation is an Article 26(1) risk.

A right to request supplementary documentation is important for context-specific deployments. Standard Annex IV documentation covers the general case. A deployer in a specific regulated sector, or one deploying to a population with particular characteristics, may need supplementary information to build an adequate compliance programme. The contract should establish that the provider will respond to reasonable supplementary documentation requests within a defined period.

Finally, documentation retention should be addressed. Article 17(1) requires providers to retain technical documentation for ten years following placement on the market. Deployers should retain their copy of the documentation they received, dated and version-referenced, for the same period. This retained record is both a compliance file and, in the event of an incident, part of the evidence record that demonstrates the deployer acted in accordance with the information available to them.
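A retention record of this kind needs only three properties: it is dated, version-referenced, and tamper-evident. One minimal sketch, with illustrative field names and a content hash standing in for whatever evidence-integrity mechanism the organisation already uses:

```python
# Illustrative sketch: a dated, version-referenced retention record for the
# documentation the deployer received. Hashing the received file gives
# tamper-evident proof of what was on hand at deployment. Field names are
# illustrative assumptions.

import hashlib
from datetime import date

def retention_record(doc_bytes: bytes, version: str, received: date) -> dict:
    return {
        "documented_version": version,
        "received_date": received.isoformat(),
        "sha256": hashlib.sha256(doc_bytes).hexdigest(),
        # Align the deployer's retention with the provider's ten-year duty.
        # (Naive year arithmetic; a real system would handle edge dates.)
        "retain_until": received.replace(year=received.year + 10).isoformat(),
    }

rec = retention_record(b"annex iv documentation pdf bytes", "2.4.1", date(2026, 8, 2))
print(rec["retain_until"])
# → 2036-08-02
```

Kept alongside the dated record of any supplementary documentation requests and responses, this forms the evidence file described throughout this analysis.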

The insurance dimension

Technical documentation is not only a regulatory file. It is the primary evidence document that AI liability insurers require before writing a policy. Insurers working from the AIUC-1 standard, the Munich Re aiSure framework, and the emerging coverage criteria at Armilla all need to know what the system was designed to do, under what conditions, with what limitations, and what monitoring is in place. The Annex IV documentation structure is, in effect, the insurer's underwriting questionnaire answered in advance.

Deployers who can produce the documentation they received at procurement, together with evidence of the monitoring and oversight procedures they established in response to it, present a materially lower underwriting risk than those who cannot. The documentation gap is one of the primary reasons European AI underwriters currently find it difficult to price policies. Deployers who close this gap through rigorous procurement practice position themselves for coverage before the mainstream market arrives.

For a structured view of how the seven dimensions of the Agent Certified methodology map to the documentation requirements in Annex IV, see the Agent Certified site. For coverage infrastructure that is being built in advance of the 2026 enforcement deadlines, see agentinsured.eu.

Frequently asked questions

Does Article 17 apply to deployers or only to providers?

Article 17 is addressed to providers of high-risk AI systems. Providers must draw up technical documentation before placing the system on the market. Deployers are not directly required to produce this documentation, but they cannot comply with Article 26(1), (2), or (3) without having received and understood it. The deployer's obligation is to demand it, review it, and act on it.

What must Article 17 technical documentation contain?

Annex IV of Regulation (EU) 2024/1689 specifies the required content across six categories: general description including intended purpose and version, detailed design specification, training and data governance, testing and validation results with accuracy benchmarks, risk management measures and residual risks, and instructions for use. Each category has specific content requirements. Documentation that omits a category is structurally incomplete.

What if a provider refuses to supply technical documentation?

A provider who will not supply Annex IV documentation is a provider whose compliance posture is uncertain. Article 13 requires providers to accompany high-risk systems with instructions for use. Refusal to supply these is a provider-side compliance failure. Deployers should document the refusal and consider whether to proceed with the deployment. Proceeding without adequate documentation is a deployer-side compliance risk under Article 26(1).

What happens if the provider's documentation is generic rather than deployment-specific?

Generic documentation that does not address the specific deployment context may not satisfy the Article 26(1) requirement to use the system in accordance with adequate instructions. Deployers should formally request supplementary documentation addressing their specific population, sector, and use case. Document the request and the response. This record is part of the deployer's compliance file.

How does technical documentation connect to AI insurance?

Insurers underwriting AI liability policies require governance evidence before writing coverage. Technical documentation is the primary evidence that the system was assessed, tested, and documented to a defined standard. Deployers with adequate documentation are more insurable than those without. For the current landscape of available AI liability coverage in Europe, see the coverage framework analysis on agentinsured.eu.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council, Article 17, Technical documentation.
  2. Regulation (EU) 2024/1689, Annex IV, Technical documentation referred to in Article 17(1).
  3. Regulation (EU) 2024/1689, Article 13, Transparency and provision of information to deployers.
  4. Regulation (EU) 2024/1689, Annex XIII, Instructions for use referred to in Article 13(3).
  5. Regulation (EU) 2024/1689, Article 9, Risk management system.
  6. Regulation (EU) 2024/1689, Article 26, Obligations of deployers of high-risk AI systems.
  7. Regulation (EU) 2024/1689, Article 99, Penalties.
  8. Regulation (EU) 2024/1689, Article 71, EU database for high-risk AI systems.
  9. AIUC-1, AI Insurance Underwriting Standard, AI Underwriting Company, 2025.
  10. Munich Re, aiSure AI performance insurance framework, Special Enterprise Risks division, 2025.