Article 12 sits in Chapter III, Section 2 of Regulation (EU) 2024/1689 alongside the other technical obligations that define what a high-risk AI system must be capable of doing before it can be lawfully placed on the market. Of these obligations, logging is the most directly actionable for deployers. The provider must build it. The deployer must use it and retain it. The regulator may inspect it. The court may rely on it. And the insurer will ask for it.
Key takeaways
- Article 12 requires providers to build automatic logging into every high-risk AI system, with a level of traceability appropriate to the system's intended purpose. A system without this capability is not lawfully marketable in the EU.
- Article 26(5) requires deployers to retain the automatically generated logs for at least six months, or at least one year if the deployer exercises public authority or is an EU institution.
- The log record must be sufficient to enable post-hoc verification of compliance and to reconstruct what the system did at the time of any alleged failure.
- Sectors subject to separate Union or national law on record-keeping (financial services, healthcare, employment) may face longer retention obligations than the Article 26(5) minimum.
- AI insurance underwriters treat the log record as primary evidence of operational monitoring. Operators without complete logs are at material risk of claim denial or partial settlement.
What Article 12 requires from providers
Article 12(1) establishes the technical obligation: high-risk AI systems must be designed and developed with capabilities enabling the automatic recording of events throughout their lifetime. The logs must enable the monitoring of the operation of the system and allow for post-hoc verification of its compliance with the requirements of Chapter III.
The phrase "automatic recording" is significant. The logging capability must be embedded in the system itself, not bolted on through external monitoring tools that the deployer separately configures. A provider that supplies a high-risk AI system without built-in logging does not satisfy Article 12. The deployer receiving such a system cannot make it compliant by adding external instrumentation alone, because the requirement sits with the provider's design obligation.
Article 12(2) further requires the logging capabilities to ensure a level of traceability appropriate to the intended purpose of the system. This proportionality principle means that a biometric categorisation system will require more granular logging than an AI-assisted scheduling tool, because the compliance stakes and the potential for harm differ. The risk management system required under Article 9 informs what level of traceability is appropriate: if the risk management process identifies a particular failure mode as consequential, the logs must capture data that would allow that failure mode to be identified post-hoc.
Article 12(3) specifies that for high-risk AI systems used for remote biometric identification, covered by Annex III, point 1(a), the logging capability must include a minimum set of data: the start and end date and time of each use, the reference database against which input data has been checked, the input data for which the search has led to a match, and the identification of the natural persons involved in verifying the results. This is the most detailed logging specification in the article and reflects the heightened sensitivity of biometric identification in public spaces.
For all other high-risk AI systems, the minimum content of the logs is set by what Article 12(1) requires to "allow for post-hoc verification of compliance" combined with the technical documentation that Article 11 and Annex IV mandate. The technical documentation must describe the system's monitoring, testing, and validation capabilities. The logs must be consistent with what that documentation specifies.
Article 26(5): what the deployer must actually do
The deployer's obligation under Article 26(5) is direct and time-bound: keep the logs automatically generated by the high-risk AI system, to the extent those logs are under the deployer's control, for a period appropriate to the system's intended purpose and of at least six months, unless applicable Union or national law provides otherwise.
Three points about this obligation deserve attention.
First, the obligation is conditional on the logs being "under the control" of the deployer. This formulation acknowledges the market reality that many high-risk AI systems are cloud-hosted services where the provider holds the logs on infrastructure they control. If the provider retains the logs and does not give the deployer access to them, the deployer cannot comply with the retention obligation. This creates a contractual imperative: before deploying a high-risk AI system, the deployer must secure contractual access to the automatically generated logs, the right to retain them for the required period, and a clear mechanism for export or download if the provider relationship ends.
Second, six months is a minimum, not a ceiling. Sectoral rules frequently require longer. Financial services firms subject to MiFID II and MiFIR must retain records of transactions and orders for five years. Healthcare operators subject to GDPR's special category data rules may face retention requirements derived from clinical governance obligations. Employment screening tools may need to retain decision-supporting data for the duration of potential challenge periods under national employment law. Deployers should map their specific regulatory context and apply the longest applicable retention period.
Third, the elevated one-year minimum for deployers exercising public authority reflects the public accountability dimension of state use of AI. Law enforcement agencies, immigration authorities, and public benefit administrators that deploy high-risk AI face the longer retention requirement automatically, without needing to identify any other applicable legal obligation. This affects a significant population of deployers across EU member states.
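The "longest applicable period" rule lends itself to a simple mechanical check. The sketch below is a minimal illustration; the retention floors shown are assumptions that each deployer must verify against its own regulatory mapping, and the MiFID II figure is the five-year record-keeping period mentioned above.

```python
from datetime import timedelta

# Illustrative retention floors (assumptions to verify per deployer):
RETENTION_FLOORS = {
    "ai_act_default": timedelta(days=183),          # Article 26(5): at least six months
    "ai_act_public_authority": timedelta(days=365),  # elevated minimum for public authority
    "mifid_ii_records": timedelta(days=5 * 365),     # MiFID II transaction records
}

def applicable_retention(obligations: list[str]) -> timedelta:
    """Apply the longest retention period among all applicable obligations."""
    return max(RETENTION_FLOORS[o] for o in obligations)

# A financial-services deployer retains for the MiFID II period, not six months.
period = applicable_retention(["ai_act_default", "mifid_ii_records"])
```

The point of encoding the rule is that the chosen period, and the obligation that drove it, can then be recorded in the compliance file alongside its legal basis.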
The evidence function of the log record
The practical significance of Article 12 and Article 26(5) goes beyond regulatory compliance. The log record is the primary evidence in any post-incident investigation, whether by a national market surveillance authority, a data protection authority, a court, or an insurer.
In regulatory enforcement proceedings, the national market surveillance authorities established under Articles 74 and 84 of Regulation (EU) 2024/1689 have the power to request access to documentation demonstrating compliance. The enforcement architecture covered in our briefing on the EU AI Act enforcement structure explains how national authorities will conduct market surveillance. The log record is the operational evidence that such surveillance will examine. A deployer with intact, complete logs covering the operational period can demonstrate how the system behaved. A deployer without logs cannot.
In civil proceedings under the revised Product Liability Directive (Directive 2024/2853), the disclosure facilitation rules in Article 9 allow courts to order defendants to produce technical documentation and logs. The presumption of defectiveness in Article 10 applies where the claimant faces disproportionate difficulty obtaining evidence. Where a deployer has destroyed, lost, or never obtained the logs that Article 12 required, that absence strengthens the claimant's position: it becomes more difficult for the deployer to rebut the presumption that the system was defective.
In insurance claims, the log record is the underwriter's window into what the system actually did. Products written using the AIUC-1 standard and Munich Re aiSure both condition coverage on evidence of operational monitoring. An operator presenting a complete log record demonstrating that anomalies were recorded and reviewed is presenting the evidence that justifies a claim payment. An operator presenting nothing is providing the underwriter with a basis for reduced payment or denial.
What to verify when procuring a high-risk AI system
Before procuring and deploying any high-risk AI system, a compliance officer or legal team should verify four things related to Article 12.
The first is confirmation from the provider, in writing, that the system includes Article 12-compliant automatic logging. This should be a specific representation in the supply contract, not a generic compliance warranty. The representation should specify what events are logged, at what granularity, and in what format.
The second is access. The deployer must have the contractual right to access and export the logs. Cloud-hosted systems where the provider controls the log infrastructure are the most common source of access failures. The contract should specify the mechanism for log delivery (API export, periodic transfer, direct access to log storage), the format in which logs are provided, and the provider's obligation to maintain access throughout the contract term and for a transition period after termination.
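Whatever delivery mechanism the contract specifies, the deployer-side pattern is the same: pull log batches from the provider, persist them on infrastructure the deployer controls, and track completeness. The sketch below assumes a hypothetical cursor-based export interface (`fetch_batch` stands in for an API export or periodic transfer); no real provider API is implied.

```python
import json
import tempfile
from pathlib import Path

def export_logs(fetch_batch, out_dir: Path) -> int:
    """Pull log batches via a provider's export mechanism and persist them
    locally. `fetch_batch(cursor)` is a stand-in for the contractually agreed
    delivery mechanism; it returns (records, next_cursor), with
    next_cursor=None once all batches are delivered. Returns records written."""
    out_dir.mkdir(parents=True, exist_ok=True)
    cursor, total, batch_no = None, 0, 0
    while True:
        records, cursor = fetch_batch(cursor)
        if records:
            # One JSON Lines file per batch, numbered for completeness checks.
            (out_dir / f"batch-{batch_no:05d}.jsonl").write_text(
                "\n".join(json.dumps(r) for r in records)
            )
            total += len(records)
            batch_no += 1
        if cursor is None:
            return total

def fake_fetch(cursor):
    """Hypothetical two-page export used only to demonstrate the loop."""
    pages = {None: ([{"event": "use_start"}, {"event": "use_end"}], 1),
             1: ([{"event": "review"}], None)}
    return pages[cursor]

total = export_logs(fake_fetch, Path(tempfile.mkdtemp()) / "logs")
```

Keeping the export under deployer control is what makes the Article 26(5) retention period enforceable even if the provider relationship ends.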
The third is alignment with the deployer's own retention obligations. The contract should not allow the provider to delete logs before the deployer's applicable retention period expires. For deployers in financial services or healthcare, this may require negotiating provider log retention that exceeds the Article 26(5) minimum.
The fourth is the relationship between logs and the technical documentation. The technical documentation provided under the Article 26 deployer obligations framework should describe what the logging capability records. Where the technical documentation and the actual logs are inconsistent, the deployer has a documentation failure that creates compliance risk even if the logs themselves are complete.
The logging obligation and the broader compliance architecture
Article 12 does not stand alone. It sits within the interconnected system of high-risk AI obligations in Chapter III, Section 2 of Regulation (EU) 2024/1689. The risk management system under Article 9 defines the risk profile that logging must be capable of monitoring. The data governance requirements under Article 10 specify the data management obligations that produce the input data the logs must record. The human oversight requirements under Article 14 depend on the log record to demonstrate that designated persons can meaningfully interpret and act on what the system records.
For the deployer, the practical architecture is sequential: the provider's Article 9 risk management process establishes what must be monitored; Article 12 requires that monitoring be automatic and logged; Article 26(5) requires the deployer to retain those logs; and the deployer's own Article 26(1) obligation to implement appropriate technical and organisational measures gives effect to that retention as an operational programme rather than a contractual right.
This interconnection is relevant for the Digital Omnibus delay scenario. If the Omnibus passes and the two-year delay to high-risk obligations takes effect, the practical impact for deployers already using high-risk AI systems with Article 12-compliant logging is limited: they should continue retaining logs and monitoring their systems, because the risk profile that makes logging valuable has not changed. The regulatory obligation may be delayed. The operational and liability rationale for maintaining the log record is not.
For coverage and certification pathways in relation to your Article 12 log records, see the Agent Certified methodology for how logging and evidence retention map to the certification framework's evidence dimension, and Agent Insured's coverage framework for how the log record affects your position in the underwriting process.
Immediate action items for deployers
Deployers operating high-risk AI systems before the August or December 2026 deadlines should take the following steps in relation to Article 12 and Article 26(5).
Map every high-risk AI system in use and confirm whether each has Article 12-compliant logging built in. For any system where logging capability is unclear, send a written enquiry to the provider requesting confirmation and documentation.
Review every supply contract to confirm that the deployer has access rights to the generated logs and that the provider's data retention policies do not allow deletion before the applicable retention period expires.
Implement an internal log retention procedure specifying where logs are stored, who has access, what access controls apply, and how the retention period is tracked and enforced.
Document the log retention procedure as part of the technical and organisational measures required under Article 26(1). This documentation becomes part of the compliance file that market surveillance authorities will examine and that insurance underwriters will request.
Where sectoral obligations require longer retention than Article 26(5)'s six-month minimum, apply the longer period and record the legal basis for doing so in the compliance file.
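The retention-tracking step of the procedure above can be made mechanical: classify each stored log batch as on hold or releasable against the applicable period, so that nothing is deleted early. A minimal sketch, with illustrative batch names and dates:

```python
from datetime import date, timedelta

def retention_status(batches: dict[str, date], today: date,
                     retention: timedelta) -> dict[str, str]:
    """Classify each stored log batch as 'hold' (must still be retained)
    or 'releasable' (retention period expired; deletion is permitted)."""
    return {
        name: "releasable" if today >= created + retention else "hold"
        for name, created in batches.items()
    }

# Hypothetical batches checked against the six-month Article 26(5) minimum.
status = retention_status(
    {"2026-01-batch": date(2026, 1, 31), "2026-07-batch": date(2026, 7, 31)},
    today=date(2026, 9, 1),
    retention=timedelta(days=183),
)
```

Running the same check with a longer sectoral period simply moves batches back to "hold", which is why the applicable period and its legal basis belong in the compliance file.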
Frequently asked questions
What does Article 12 of the EU AI Act require for high-risk AI systems?
Article 12 requires providers to design and develop high-risk AI systems with automatic logging capabilities that record events throughout the system's operational lifetime. The logs must enable post-hoc verification of compliance and must capture information sufficient to reconstruct what the system did in any given period of use. The level of traceability must be proportionate to the system's intended purpose and the risks identified in the Article 9 risk management process.
How long must deployers retain logs under Article 26(5)?
Article 26(5) requires deployers to retain automatically generated logs for at least six months. For deployers that are EU institutions or bodies, or deployers exercising public authority, the minimum is at least one year. Sectoral rules (financial services, healthcare, employment) may require longer periods. Deployers should apply the longest applicable retention period and document the legal basis in their compliance file.
What information must Article 12 logs capture?
The logs must capture information sufficient to enable monitoring and post-hoc compliance verification. For remote biometric identification systems, Article 12(3) specifies minimum content: the start and end date and time of each use, the reference database checked against, the input data for which the search led to a match, and the persons involved in verifying the results. For other high-risk AI systems, the content is determined by the technical documentation and the risk management process: the logs must record what is needed to identify whether the system operated within its documented parameters.
What happens if a high-risk AI system cannot generate the logs Article 12 requires?
A system without Article 12-compliant logging is not lawfully marketable in the EU. For deployers, operating such a system creates regulatory enforcement exposure and undermines civil liability defences. The deployer should treat the absence of logging as a procurement failure, seek immediate contractual remedy from the provider, and consider whether use of the non-compliant system should continue while the issue is resolved.
Why does the Article 12 log record matter for insurance purposes?
Insurance underwriters for AI products require operational monitoring evidence as part of the underwriting submission. The Article 12 log record demonstrates that the system was monitored and that anomalies were detectable. Underwriting standards including AIUC-1 and Munich Re aiSure condition coverage on evidence of this kind. Without an intact log record, operators face reduced coverage options, higher premiums, or claim denial if an incident occurs.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Article 12 (Record-keeping) and Article 26(5) (Obligations of deployers of high-risk AI systems).
- Regulation (EU) 2024/1689, Annex IV: Technical documentation referred to in Article 11(1), specifying the documentation framework within which logging capabilities must be described.
- Directive 2024/2853 of the European Parliament and of the Council on liability for defective products (revised Product Liability Directive), Article 9 (Disclosure of evidence) and Article 10 (Burden of proof).
- Regulation (EU) 2024/1689, Articles 74 and 84, establishing national market surveillance authorities and their investigation powers.
- AIUC-1 reference standard. AI Underwriting Company, 2025. Operational monitoring requirements for coverage eligibility.
- Munich Re. aiSure product documentation, 2025 to 2026 edition. Technical documentation and monitoring evidence requirements for underwriting submissions.
- ISO/IEC 42001:2023. Information technology. Artificial intelligence. Management system. Monitoring and measurement requirements relevant to logging obligations.