The transparency provisions of the EU AI Act are not a disclosure regime tacked onto the side of a risk-management framework. They are structural. Article 13 requires that high-risk AI systems be designed for interpretability from the outset, and Article 11 requires that the technical basis for that interpretability be documented before the system reaches the market. The obligation to produce instructions for use, to pass them on to deployers, and to disclose AI-to-human interactions to end users forms a chain of accountability that supervisors will test from both ends.
Key takeaways
- Article 13(1) makes transparency a design requirement, not a documentation add-on. Providers must build high-risk AI systems so that deployers can interpret and act on outputs correctly.
- Article 13(3) lists seven minimum items that instructions for use must contain, including accuracy and robustness parameters, error conditions, human oversight measures, and a hook into the Article 27 fundamental rights impact assessment.
- Article 11 and Article 13 address different audiences: technical documentation is prepared for supervisors and notified bodies; instructions for use are addressed to deployers in the supply chain.
- Article 50 creates a separate transparency obligation for providers and deployers whose systems interact directly with natural persons. Chatbot users must be told they are talking to AI unless that is obvious.
- Penalty exposure for transparency breaches reaches EUR 7.5 million or 1 per cent of worldwide annual turnover at the provider level where incorrect, incomplete, or misleading information is supplied to authorities; deployers who ignore instructions for use face second-tier fines of up to EUR 15 million or 3 per cent.
What Article 13 actually requires from high-risk AI providers
Article 13(1) of Regulation (EU) 2024/1689 states that high-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. The obligation is phrased as a design requirement. A provider cannot satisfy it by writing a clear manual for an opaque system. The interpretability must be embedded in how the system operates.
In practice, this means that a provider building a high-risk AI system must consider, during the design and development phase, whether the outputs of that system are sufficiently self-explanatory for a professional deployer to act on them correctly. A credit scoring model that produces a score without any indication of the factors that drove it, or a recruitment screening tool that ranks candidates without any output that the deployer can interrogate, does not meet the standard in Article 13(1) unless the instructions for use supply that interpretive layer by another route.
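To make the design point concrete, consider a minimal sketch of what an interpretive layer can look like in code. Everything here is illustrative: the linear scorer, the ScoreResult structure, and the field names are assumptions made for the example, not anything the Act or any particular product prescribes.

```python
# Hedged sketch: a scorer that returns the factors behind its output,
# not just the score. A bare number would leave the deployer unable to
# interpret the result in the sense of Article 13(1).
from dataclasses import dataclass, field

@dataclass
class ScoreResult:
    score: float                                   # the model's output
    top_factors: list[tuple[str, float]] = field(default_factory=list)
    # (feature name, signed contribution) pairs the deployer can interrogate

def score_applicant(features: dict[str, float],
                    weights: dict[str, float]) -> ScoreResult:
    """Toy linear scorer that surfaces the drivers alongside the score."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return ScoreResult(score=sum(contributions.values()),
                       top_factors=ranked[:3])

result = score_applicant(
    {"income": 0.8, "debt_ratio": 0.6, "tenure_years": 4.0},
    {"income": 2.0, "debt_ratio": -3.5, "tenure_years": 0.5},
)
print(f"score={result.score:.2f}, drivers={result.top_factors}")
```

A real high-risk system would use a more serious attribution method, but the structural idea is the same: the output object carries the interpretive layer with it, rather than leaving the deployer to reverse-engineer it from a manual.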
Article 13(2) adds that high-risk AI systems shall be accompanied by instructions for use in an appropriate digital format. The instructions must be concise, complete, correct, and clear. They must be relevant, accessible, and comprehensible to deployers. The standard is demanding. Instructions that are written at a level of technical detail accessible only to AI specialists are not comprehensible to a typical deployer in the sense that Article 13(2) requires.
The application date for these obligations, along with the rest of the Chapter III high-risk AI regime, is 2 August 2026.
The seven mandatory elements in the instructions for use
Article 13(3) sets a minimum content list for the instructions for use. The list is not exhaustive, but every item on it must be present. Read alongside Article 11, which governs the technical documentation that providers must prepare for supervisors, the two provisions together define what information must exist and to whom it must be disclosed.
Identity and contact details of the provider
Article 13(3)(a) requires the instructions to identify the provider by name and to give a contact address. This is the entry point for deployers who need to exercise any right under the Act, including the right to request information, to report a serious incident, or to notify the provider of a use case the provider should classify under Article 25. A generic support URL does not discharge this duty. A named legal entity and a deliverable contact address are required.
Characteristics, capabilities, and limitations
Article 13(3)(b) requires disclosure of the characteristics, capabilities, and limitations of the AI system. This covers the system's purpose, the population it was trained or validated for, the conditions under which its outputs are reliable, and the conditions under which they are not. The intended purpose is significant because deployers are bound under Article 26(1) to use the system within its instructions. If the intended purpose is defined narrowly, a deployer who extends the system to adjacent use cases is outside the instructions and carries the consequences.
Expected accuracy, robustness, and cybersecurity
Article 13(3)(c) requires the instructions to specify the expected level of accuracy, robustness, and cybersecurity of the AI system, including any known or foreseeable circumstances that might lead to a reduction in those metrics. For a classification system, this means stating the false positive and false negative rates under the conditions tested. For an autonomous agent, it means describing the failure modes under which the agent may act incorrectly, fail to act, or be manipulated by adversarial inputs.
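As a worked illustration of the kind of figures this item expects, the following sketch computes false positive and false negative rates from a validation-set confusion matrix. The counts are hypothetical.

```python
# Hedged sketch: the error rates a provider would state in the
# instructions for use. All counts below are invented for illustration.
def error_rates(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """False positive rate (FP / actual negatives) and false negative
    rate (FN / actual positives), as measured under the tested conditions."""
    return {
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# e.g. a screening classifier evaluated on a 10,000-case validation set
print(error_rates(tp=900, fp=150, tn=8800, fn=150))
```

The disclosure obligation attaches to the conditions under which these numbers were measured as much as to the numbers themselves; rates quoted without their test conditions do not tell a deployer when they stop being valid.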
Error conditions and unexpected performance
Article 13(3)(d) requires disclosure of the circumstances under which the AI system may produce errors, may be misused, or may not perform as expected. This is the risk-side counterpart to the capability disclosure under Article 13(3)(b). A provider who withholds known failure modes from deployers, or who describes them in terms too general to support action by the deployer, is not complying with Article 13(3)(d). National supervisors have indicated that they will treat an unexplained performance failure as prima facie evidence that the instructions for use were deficient.
Human oversight measures
Article 13(3)(e) requires the instructions to describe the human oversight measures under Article 14, including the technical measures that enable oversight and the steps necessary for natural persons designated to exercise that oversight to perform their role effectively. This item connects Article 13 directly to the oversight architecture required under Article 14. A provider who designs in oversight mechanisms must also explain them in the instructions. The deployer cannot assign competent oversight persons under Article 26(2) if they do not know what oversight is technically possible.
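One common technical oversight measure, a confidence gate that routes uncertain outputs to a designated human reviewer, can be sketched as follows. The threshold, names, and escalation path are assumptions made for illustration; a real system would derive them from validation results and the Article 14 oversight design.

```python
# Hedged sketch: a confidence gate as one example of a technical
# oversight measure the instructions for use would have to describe.
from typing import Callable

CONFIDENCE_FLOOR = 0.85  # illustrative; a real value would come from validation

def gated_decision(output: str, confidence: float,
                   escalate: Callable[[str, float], str]) -> str:
    """Act automatically on confident outputs; route the rest to a human."""
    if confidence >= CONFIDENCE_FLOOR:
        return output                      # acted on automatically
    return escalate(output, confidence)    # human-in-the-loop path

decision = gated_decision(
    "reject", 0.62,
    escalate=lambda out, conf: f"queued for human review ({conf:.0%})",
)
print(decision)
```

The point of Article 13(3)(e) is that a mechanism like this is useless to the deployer unless the instructions say it exists, what the threshold means, and what the designated reviewer is expected to do with escalated cases.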
Input data requirements
Article 13(3)(f) requires specification of the input data that the system needs to function as described, including where relevant the type, format, and quality of data required. This item is particularly significant for deployers who will connect their own data infrastructure to the system. Under Article 26(4), deployers are accountable for the relevance and representativeness of input data to the extent they control it. They cannot meet that obligation if the provider has not told them what the system expects.
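The input data item lends itself to a machine-checkable form. The sketch below expresses a toy specification and a validator a deployer could run before feeding records to the system; every field name and constraint is invented for the example.

```python
# Hedged sketch: an input data specification of the kind Article 13(3)(f)
# requires, expressed as a structure the deployer can check against.
INPUT_SPEC = {
    "applicant_age":  {"type": int,   "min": 18, "max": 100},
    "monthly_income": {"type": float, "min": 0.0},
    "currency":       {"type": str,   "allowed": {"EUR", "USD"}},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of violations; an empty list means the record conforms."""
    problems = []
    for name, rule in INPUT_SPEC.items():
        if name not in record:
            problems.append(f"missing field: {name}")
            continue
        value = record[name]
        if not isinstance(value, rule["type"]):
            problems.append(f"{name}: expected {rule['type'].__name__}")
            continue
        if "min" in rule and value < rule["min"]:
            problems.append(f"{name}: below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            problems.append(f"{name}: above maximum {rule['max']}")
        if "allowed" in rule and value not in rule["allowed"]:
            problems.append(f"{name}: not in {rule['allowed']}")
    return problems

print(validate_record({"applicant_age": 17, "monthly_income": 2500.0,
                       "currency": "GBP"}))
```

A specification in this form gives the deployer something concrete to hold their Article 26(4) relevance and representativeness duty against.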
The fundamental rights impact assessment hook
Article 13(3)(g) requires, where Article 27 applies, that the instructions contain information sufficient to enable the deployer to carry out a fundamental rights impact assessment. Article 27 applies to bodies governed by public law, private operators providing public services, and certain private deployers using systems for creditworthiness assessment and life and health insurance pricing. The instructions for use must therefore signal, where applicable, that an Article 27 assessment is required and provide enough information about the system's design and known risks to support that assessment.
What deployers receive and what they must pass on
The relationship between provider and deployer in the transparency chain is asymmetric. Providers hold the design knowledge. Deployers hold the deployment knowledge. The Act attempts to bridge that asymmetry by requiring providers to transfer enough information to enable deployers to use the system within its limits and to exercise effective oversight.
Under Article 26(1), deployers are required to take appropriate technical and organisational measures to use the high-risk AI system in accordance with the instructions for use. This converts the instructions received from the provider into a compliance obligation on the deployer. A deployer who does not read the instructions is already non-compliant. A deployer who reads them and departs from them without justification is in a worse position.
Where a deployer operates within a supply chain that includes further downstream parties, for example a financial institution that integrates a third-party credit model into its own customer-facing service, the question of what information must be passed further down the chain is governed by the Act's provisions on importers and distributors, read alongside the general deployer duties. The short answer is that the deployer cannot dilute the transparency obligations owed to persons affected by the system's outputs by interposing a sub-layer of the supply chain between those persons and the information they are entitled to receive.
Article 50: transparency for AI systems that interact with the public
Article 13 addresses transparency within the supply chain, between providers and deployers of high-risk systems. Article 50 addresses transparency at the point of public contact, between AI systems and the natural persons who interact with them.
Article 50(1) requires providers of AI systems intended to interact directly with natural persons to design and develop those systems so that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. The obligation runs to the provider at design time, and to the deployer at deployment time. A deployer who installs a chatbot and removes or suppresses the AI disclosure is in breach of Article 50(1).
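A deployer-side implementation of the Article 50(1) disclosure can be as simple as a session wrapper that guarantees the first response carries the notice. The sketch below is a minimal illustration, not a compliance recipe; the class and message text are our assumptions.

```python
# Hedged sketch: a session wrapper that ensures the AI disclosure is
# delivered once per conversation and cannot be silently dropped by
# downstream formatting. Names and notice wording are illustrative.
AI_NOTICE = "You are chatting with an AI system, not a human agent."

class DisclosingChatSession:
    def __init__(self) -> None:
        self._disclosed = False

    def respond(self, model_reply: str) -> str:
        if not self._disclosed:
            self._disclosed = True
            return f"{AI_NOTICE}\n\n{model_reply}"
        return model_reply

session = DisclosingChatSession()
print(session.respond("Hello! How can I help with your claim?"))
```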
Article 50(2) addresses AI-generated content. Providers of AI systems that generate synthetic audio, image, video, or text content must ensure the output is marked in a machine-readable format and, where technically feasible, detectable as artificially generated or manipulated. The obligation applies even where the content is not labelled to the end user, because the machine-readable marking is intended for detection tools rather than for direct consumption.
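To illustrate the machine-readable marking idea, the sketch below attaches a provenance record to a piece of generated content. The record format is invented for the example; real deployments would use an interoperable standard such as embedded content credentials (for instance C2PA) rather than an ad hoc JSON sidecar.

```python
# Hedged sketch: a machine-readable provenance record for generated
# content, aimed at detection tools rather than human readers. The
# schema below is our invention, not a standard.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, generator_id: str) -> str:
    """Build a JSON record that detectors can match against the content."""
    return json.dumps({
        "ai_generated": True,
        "generator": generator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created_utc": datetime.now(timezone.utc).isoformat(),
    })

record = provenance_record(b"synthetic press photo bytes", "acme-imggen-v2")
print(record)  # stored or embedded alongside the content for detectors
```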
Article 50(4) adds a specific requirement for deep fakes: deployers who use AI to generate or manipulate image, audio, or video content resembling real persons, objects, places, or events must disclose that the content has been artificially generated or manipulated. Where the content forms part of an evidently artistic, creative, satirical, or fictional work, the obligation is limited to a disclosure made in a way that does not hamper the display or enjoyment of the work.
The second subparagraph of Article 50(4) extends the duty to deployers of AI systems generating text published with the purpose of informing the public on matters of public interest, for example news content or political commentary. The disclosure obligation falls away where the AI-generated text has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication. This provision has particular relevance for organisations using AI to publish regulatory analysis, financial commentary, or political content at scale.
The application date for Article 50 does not follow the general-purpose AI provisions. The Chapter V GPAI obligations applied from 2 August 2025, but Article 50 sits in Chapter IV and applies from 2 August 2026, the Regulation's general application date.
How transparency obligations differ across providers, deployers, and users
The Act distributes transparency obligations across three layers of the supply chain. The layers are distinct and the obligations do not fully overlap.
Providers bear the primary technical obligation. Under Article 13 read with Article 11, they must design for interpretability, prepare comprehensive technical documentation for supervisors, and produce instructions for use that meet the minimum content standard under Article 13(3). Where the system changes materially after it enters the market, Article 11(1) requires the technical documentation to be kept up to date.
Deployers bear the operational obligation. Under Article 26(1), they must use the system within its instructions. Under Article 13(3)(e), they receive the human oversight architecture that they are required to staff under Article 26(2). Under Article 50(1), they carry a co-obligation to inform users that they are interacting with an AI system where the provider has not already done so by design.
Users, in the sense of natural persons affected by a high-risk system, are not active bearers of transparency obligations, but they are the ultimate beneficiaries of the regime. Article 86 gives natural persons who are subject to a decision by a high-risk AI system the right to obtain an explanation of the role of the system in that decision. This right to explanation is a downstream effect of the transparency architecture that runs from provider design to deployer documentation.
Practical consequences: what a compliant system must ship with
A high-risk AI system placed on the EU market after 2 August 2026 must come with a documentation package that allows a competent deployer to do six things without contacting the provider: understand what the system does and does not do; identify the failure conditions; assign named oversight persons and train them; decide whether to carry out a fundamental rights impact assessment; configure the input data pipeline; and report a serious incident to the correct authority.
From a practical standpoint, this means a compliant system ships with, at a minimum, the following elements: a cover sheet identifying the provider, the system version, and a contact address; a characteristics and limitations document covering the intended purpose, population scope, accuracy metrics, and error conditions; an oversight guide describing the technical oversight measures and the steps required to exercise them; an input data specification; a flag where Article 27 applies; and a serious incident reporting procedure.
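That ship list can be enforced mechanically. The sketch below models the documentation package as a dictionary and checks it against the seven minimum items before release; the key names mirror the elements discussed above but are otherwise our invention.

```python
# Hedged sketch: a pre-release completeness check for the instructions
# for use. The keys track the seven Article 13(3) items as this article
# presents them; the structure itself is illustrative.
REQUIRED_ITEMS = {
    "provider_identity_and_contact",      # identity and contact details
    "characteristics_and_limitations",    # purpose, scope, limits
    "accuracy_robustness_cybersecurity",  # expected performance levels
    "error_and_misuse_conditions",        # known failure modes
    "human_oversight_measures",           # Article 14 oversight design
    "input_data_specification",           # type, format, quality of inputs
    "fria_information_where_applicable",  # Article 27 hook
}

def missing_items(package: dict) -> set[str]:
    """Items that are absent or empty in the draft package."""
    return {item for item in REQUIRED_ITEMS
            if item not in package or not package[item]}

draft = {"provider_identity_and_contact": "Acme AI GmbH, Musterstr. 1, Berlin",
         "human_oversight_measures": "See oversight guide v1.2"}
print(sorted(missing_items(draft)))  # five gaps to close before market
```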
The technical documentation package required under Article 11, prepared for supervisors and notified bodies, is a separate instrument. Annex IV to Regulation (EU) 2024/1689 sets out its required contents in detail. It includes the general description of the system, the development process, the training and validation methodology, the post-market monitoring plan, and the conformity assessment records. The instructions for use in Article 13 draw from the same underlying knowledge base but are shorter, deployer-oriented, and do not include proprietary development details.
The enforcement angle: how supervisors use Article 13 in investigations
National market surveillance authorities act under Article 74, which applies the Union market surveillance framework of Regulation (EU) 2019/1020 to AI systems. Their investigative powers include the right to request access to documentation, to conduct unannounced inspections, and to require providers and deployers to supply information within a specified period.
In a supervision inquiry focused on a high-risk AI system, Article 13 documentation is among the first materials requested, because it tells the supervisor two things at once: what the provider said the system would do, and whether the deployer received enough information to comply with Article 26. A gap in the instructions for use is simultaneously a provider breach and a possible explanation for deployer non-compliance. Supervisors have an incentive to examine both ends of the transparency chain.
Article 99(5) sets the fine ceiling for the supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities at EUR 7.5 million or 1 per cent of worldwide annual turnover, whichever is higher. This tier applies specifically to information failures, not to substantive system deficiencies. A provider who makes a complete but inaccurate disclosure faces the same tier as one who makes no disclosure, because in both cases the supervisor receives information that does not reflect the system's actual operation.
For deployers, the relevant penalty path runs through Article 26(1). A deployer who cannot demonstrate use within the instructions for use faces second-tier fines under Article 99(4) of up to EUR 15 million or 3 per cent of worldwide annual turnover. In an enforcement scenario, the deployer's first line of defence is to produce the instructions received from the provider and show that their deployment was consistent with those instructions. A deployer who does not hold the instructions is in a materially weaker position, regardless of how they actually used the system.
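The "whichever is higher" mechanic means the effective ceiling scales with the offender's size. A worked example, using an assumed EUR 2 billion worldwide annual turnover:

```python
# Worked example of the "whichever is higher" fine ceilings discussed
# above. The turnover figure is hypothetical.
def fine_ceiling(fixed_cap_eur: float, pct_of_turnover: float,
                 turnover_eur: float) -> float:
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

turnover = 2_000_000_000  # EUR 2 bn worldwide annual turnover, assumed

info_tier = fine_ceiling(7_500_000, 0.01, turnover)       # information failures
deployer_tier = fine_ceiling(15_000_000, 0.03, turnover)  # Article 26 breaches
print(f"{info_tier:,.0f} / {deployer_tier:,.0f}")  # 20,000,000 / 60,000,000
```

For an undertaking of that size, the percentage limb dominates both tiers; the fixed caps bind only for smaller operators.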
Supervisors are also expected to coordinate with the AI Office established under Article 64 for matters involving general-purpose AI models with systemic risk, and with national data protection authorities for high-risk systems that process personal data. The multi-authority character of AI Act enforcement increases the probability that a transparency failure in one area will surface in an inquiry that originated somewhere else.
Related reading
For the full operator obligations regime, see the EU AI Act operator obligations compliance guide. For the liability framework that runs in parallel to these transparency duties, see the three gaps in AI agent underwriting. For the certification standard that maps to transparency and documentation readiness, see Agent Certified EU. For the documentation architecture used across the publication's compliance analysis, see documenting AI agent risk management for compliance.
Frequently asked questions
What does Article 13 of the EU AI Act require?
Article 13(1) of Regulation (EU) 2024/1689 requires high-risk AI systems to be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Article 13(2) adds that the system must be accompanied by instructions for use in an appropriate digital format.
What must the instructions for use include under Article 13(3)?
Article 13(3) specifies a minimum content list: identity and contact details of the provider; characteristics, capabilities, and limitations; expected accuracy, robustness, and cybersecurity; circumstances under which the system may produce errors; human oversight measures; input data requirements; and information enabling the deployer to carry out a fundamental rights impact assessment under Article 27 where applicable.
How does Article 13 relate to Article 11 on technical documentation?
Article 11 requires the provider to draw up comprehensive technical documentation before placing the system on the market. Article 13 concerns the subset of that information that must be passed on to the deployer in the form of instructions for use. Technical documentation is prepared for supervisors and notified bodies; instructions for use are addressed to the deployer. Both are mandatory, but they serve different audiences.
What transparency obligations apply under Article 50 to chatbots and general-purpose AI?
Article 50 imposes a separate transparency duty on providers and deployers of AI systems that interact directly with natural persons, including chatbots. Users must be informed that they are interacting with an AI system, unless this is obvious from context. Article 50 also requires that AI-generated content be detectable as such, with obligations on both the provider side (technical measures) and the deployer side (user notification).
What penalties apply for breaching Article 13 transparency obligations?
Providers who supply incorrect, incomplete, or misleading information to authorities face fines up to EUR 7.5 million or 1 per cent of worldwide annual turnover under Article 99(5). Deployers who fail to use a system within its instructions for use, as required by Article 26(1), face second-tier fines up to EUR 15 million or 3 per cent of worldwide annual turnover under Article 99(4). Enforcement of high-risk AI provisions begins 2 August 2026.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 11, Regulation (EU) 2024/1689, technical documentation requirements for high-risk AI systems, read with Annex IV.
- Article 13(1), Regulation (EU) 2024/1689, transparency and provision of information to deployers.
- Article 13(2), Regulation (EU) 2024/1689, instructions for use in appropriate digital format.
- Article 13(3), Regulation (EU) 2024/1689, minimum content of instructions for use.
- Article 14, Regulation (EU) 2024/1689, human oversight requirements for high-risk AI systems.
- Article 25, Regulation (EU) 2024/1689, responsibilities along the AI value chain.
- Article 26(1), Regulation (EU) 2024/1689, obligation of deployers to use systems within the instructions for use.
- Article 26(4), Regulation (EU) 2024/1689, deployer obligation regarding input data relevance.
- Article 27, Regulation (EU) 2024/1689, fundamental rights impact assessment for deployers of certain high-risk AI systems.
- Article 50, Regulation (EU) 2024/1689, transparency obligations for providers and deployers of certain AI systems.
- Article 64, Regulation (EU) 2024/1689, establishment of the AI Office.
- Article 74, Regulation (EU) 2024/1689, market surveillance and control of AI systems in the Union market.
- Article 86, Regulation (EU) 2024/1689, right to explanation of individual decision-making.
- Article 99(4), Regulation (EU) 2024/1689, penalties for non-compliance with operator obligations, including Articles 26 and 50.
- Article 99(5), Regulation (EU) 2024/1689, penalties for the supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities.
- Annex IV, Regulation (EU) 2024/1689, technical documentation referred to in Article 11(1).