From 2 August 2026, any enterprise deploying a high-risk AI system in the EU is bound by the operator obligations of Regulation (EU) 2024/1689. From 9 December 2026, any enterprise that places a commercially supplied AI system on the EU market faces the civil liability regime of Directive (EU) 2024/2853. Neither instrument alone defines the full exposure. Together, they do. This analysis maps the interaction between them at the article level.

Key takeaways

  • The EU AI Act Article 26 operator obligations enter into application on 2 August 2026. The Fundamental Rights Impact Assessment duty under Article 27 applies from the same date for qualifying deployers.
  • The revised Product Liability Directive transposition deadline is 9 December 2026. Products placed on the market after that date in each member state are subject to the new strict liability regime.
  • AI software supplied commercially is now an in-scope product under the revised PLD, by virtue of Article 4(1) and Recital 12 of Directive (EU) 2024/2853.
  • AI Act non-compliance is an express trigger condition for the rebuttable presumption of defect under Article 10 of the revised PLD. A deployer found in breach of Article 26 is materially more exposed to a concurrent civil claim.
  • Providers and deployers face concurrent but structurally different exposures: the provider carries manufacturer liability under the PLD; the deployer carries operator liability under the AI Act and potential manufacturer-equivalent liability under the PLD where it modifies the system.
  • Claims under both regimes will overlap operationally. A single AI failure event can trigger an AI Act supervisory inquiry, an Article 10 disclosure request in civil proceedings, and a PLD damages claim, all running simultaneously.
  • The insurance market is bifurcating in response: specialist carriers are writing meaningful AI product liability limits; generalist carriers are tightening through endorsements that exclude or sub-limit autonomous AI activity.

Section 1. Two regimes, one deployment.

The EU AI Act and the revised Product Liability Directive were negotiated separately, adopted separately, and are enforced by different authorities. The AI Act is a regulation enforced by national market surveillance authorities and the AI Office at the Commission. The PLD is a directive transposed into national law and enforced through civil litigation in member state courts. They have different drafting traditions, different enforcement mechanisms, and different remedial outcomes.

They share scope when an AI system is both subject to governance obligations under the AI Act and placed on the EU market commercially under the PLD product definition. For most enterprise AI deployments in 2026, both conditions are met. The system being deployed is a high-risk AI system within the meaning of Annex III of the AI Act. The system was supplied commercially and therefore meets the PLD product definition under Article 4(1). The same deployment event brings both regimes into play.

The correct frame for understanding the double exposure is not "which instrument applies" but "how do they interact when both apply." The answer has three dimensions: the activation calendar, the scope overlap, and the evidentiary bridge.

Section 2. The AI Act activation calendar.

Regulation (EU) 2024/1689 entered into force on 1 August 2024. It applies in stages. The first stage, covering prohibited practices under Chapter II, entered into application on 2 February 2025. General-purpose AI model obligations under Chapter V entered into application on 2 August 2025. The stage that matters for deployers of high-risk AI systems is the entry into application of Chapter III, which occurs on 2 August 2026.
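The staged calendar above can be sketched as a simple date lookup. The dates are those stated in the text; the dictionary, labels, and helper name are illustrative, not anything defined in the Regulation itself.

```python
from datetime import date

# Entry-into-application dates as listed in the text above (illustrative labels).
AI_ACT_STAGES = {
    date(2025, 2, 2): "Prohibited practices (Chapter II)",
    date(2025, 8, 2): "General-purpose AI model obligations (Chapter V)",
    date(2026, 8, 2): "High-risk deployer obligations (Chapter III, Arts. 26-27)",
}

def stages_in_application(on: date) -> list[str]:
    """Return the stages already in application on a given date."""
    return [label for start, label in sorted(AI_ACT_STAGES.items()) if on >= start]

# On 1 September 2026 all three stages listed above are in application.
print(stages_in_application(date(2026, 9, 1)))
```

The ordering property of `datetime.date` makes the comparison trivial; the point of the sketch is that deployer obligations switch on at a fixed date, not at a date the deployer chooses.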

On 2 August 2026, three categories of obligation become binding on deployers.

Article 26(1) requires deployers to use high-risk AI systems in accordance with the instructions for use provided by the provider, and to implement the technical and organisational measures needed to ensure compliance with those instructions. This is not a passive obligation. A deployer who does not know what the instructions for use say, or who operates outside them, is in breach on day one.

Article 26(2) requires deployers to assign human oversight to a natural person with the competence and authority to understand the system's outputs, to override them where necessary, and to halt the system where required. The oversight function must be staffed: naming a role is insufficient if the person filling it lacks training or access.

Article 26(5) requires deployers to report any serious incident, or any malfunctioning that constitutes a serious incident, to the provider and to the relevant market surveillance authority without undue delay. The post-market monitoring obligation is live from the same date.

Article 27, which requires qualifying deployers to conduct a Fundamental Rights Impact Assessment before first deployment, enters into application on the same date. The qualifying categories are: public bodies or entities providing public services; deployers of creditworthiness or insurance risk assessment systems; and deployers of systems whose outputs affect access to essential services. For a detailed FRIA walkthrough, see our Article 27 analysis and the 90-day FRIA countdown checklist.

The full high-risk regime, including conformity assessment obligations and the EU database registration requirements, enters into application on 9 December 2026 for most high-risk systems. Systems embedded in regulated products face a deferred date of 2 August 2027.

Article 99 sets the penalty tiers. Breaches of Article 26 operator obligations fall within the second tier: fines up to EUR 15 million or 3 percent of worldwide annual turnover, whichever is higher. Supervisors are instructed to take SME economic viability into account in penalty calculations under Article 99(6).
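The second-tier arithmetic is a "whichever is higher" rule, which is worth making concrete. The function below is an illustrative sketch of that calculation only; the turnover figures in the example are hypothetical, and actual penalties are set by supervisors within the cap, not by formula.

```python
def article_99_tier2_cap(worldwide_turnover_eur: float) -> float:
    """Second-tier penalty cap: EUR 15 million or 3% of worldwide
    annual turnover, whichever is higher (illustrative sketch only)."""
    return max(15_000_000, 0.03 * worldwide_turnover_eur)

# A firm with EUR 2 billion turnover: 3% is EUR 60 million, which exceeds
# the EUR 15 million floor, so the higher figure is the cap.
print(article_99_tier2_cap(2_000_000_000))  # 60000000.0
```

Note the asymmetry: for any firm with turnover below EUR 500 million, the EUR 15 million floor governs, which is why the Article 99(6) instruction to weigh SME economic viability matters.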

Section 3. The PLD activation calendar.

Directive (EU) 2024/2853 was adopted by the European Parliament and the Council on 23 October 2024 and published in the Official Journal on 18 November 2024. It entered into force on 8 December 2024. The transposition deadline for member states is 9 December 2026. Upon transposition by a member state, Council Directive 85/374/EEC is repealed in that state.

The new regime applies to products placed on the market or put into service after the transposition deadline in each member state. Products placed on the market before the deadline remain subject to the old rules, including the old directive's narrower product definition and higher claimant burden of proof.

The alignment of the PLD transposition deadline with the AI Act's full high-risk regime entry into application, both landing on 9 December 2026, is not incidental. The European legislature explicitly framed the two instruments as complementary parts of a coherent European AI liability framework. The European Commission's 2022 impact assessment accompanying the original AI Liability Directive proposal acknowledged that the revised PLD and the AI-specific liability rules were designed to close the same set of gaps together.

Member state transposition progress as of April 2026 is uneven. No member state has enacted final transposing legislation. Several, including Germany and France, have begun internal consultation on transposition measures, but published drafts are not yet available. Ireland and the Netherlands have published preliminary government analyses of the directive's scope. The December 2026 deadline is binding under EU law; late transposition does not suspend the directive's effect, and some of its provisions may be directly effective against state actors once the deadline passes, even without national implementing law.

Section 4. How AI software became a product under EU law.

The 1985 Product Liability Directive was drafted in an analogue world. Article 2 defined product as all movables. Software was treated as intangible in most member state courts and therefore outside the scope of the directive. Claimants whose losses arose from defective software were left to national tort law, which typically required proof of fault, a substantially higher bar.

Article 4(1) of Directive (EU) 2024/2853 closes that gap explicitly. It defines product to mean "any movable item, even if integrated in another movable item or in an immovable item, as well as electricity, digital manufacturing files and software." The word software is in the definition without qualification. Recital 12 extends the analysis: the definition covers AI systems whether they are delivered as standalone products, embedded in physical goods, provided as cloud-hosted services, or distributed as applications over a network.

The distinction between embedded AI and standalone AI has no effect on in-scope status under the new directive. An AI model embedded in a medical device is a product. A standalone large language model API is a product. A cloud-hosted AI agent with tool access is a product. What determines scope is commercialisation, not delivery format.

The AI-as-a-service framing does create one operational complexity. Article 4(4) of the directive addresses related services that are provided as part of a commercial arrangement and have a direct connection to a product's safety. An AI provider who offers continued fine-tuning, monitoring, or model update services as part of a commercial contract may see those services drawn into the product liability analysis if they affect the system's outputs. A deployer who contractually receives model updates without the ability to test or reject them has a stronger argument that the provider, not the deployer, is responsible for any defect introduced by an update.

Section 5. The rebuttable presumption of defect.

The provision that changes the litigation calculus most materially is Article 10 of Directive (EU) 2024/2853. It addresses a structural problem with AI liability litigation: the claimant often cannot access the technical evidence needed to prove that the AI system was defective, because that evidence is held by the defendant.

Article 10(2) provides that defectiveness shall be presumed where any of five conditions is met. The five conditions are: the defendant has failed to comply with the disclosure obligation in Article 9; the claimant has demonstrated that the product does not comply with mandatory product safety requirements; the damage has been caused by an obvious malfunction of the product during normal use or storage; the product was recalled for safety reasons; and the claimant demonstrates non-compliance with applicable legal standards designed to protect against the type of damage suffered.
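The structure of Article 10(2), under which the presumption is engaged if any listed condition holds, can be modelled as a simple any-of check. This is a sketch of the logical structure only: the field names below are illustrative shorthand, not statutory language.

```python
from dataclasses import dataclass

@dataclass
class PresumptionFacts:
    # Illustrative flags mirroring the five conditions listed above.
    defendant_breached_art9_disclosure: bool = False
    product_fails_mandatory_safety_requirements: bool = False
    obvious_malfunction_in_normal_use: bool = False
    product_recalled_for_safety: bool = False
    breach_of_protective_legal_standard: bool = False

def defect_presumed(facts: PresumptionFacts) -> bool:
    """Article 10(2) sketch: defectiveness is presumed if ANY condition is met."""
    return any([
        facts.defendant_breached_art9_disclosure,
        facts.product_fails_mandatory_safety_requirements,
        facts.obvious_malfunction_in_normal_use,
        facts.product_recalled_for_safety,
        facts.breach_of_protective_legal_standard,
    ])

# A supervisory finding of an AI Act Article 26 breach maps onto the
# mandatory-safety-requirements condition, engaging the presumption alone.
print(defect_presumed(PresumptionFacts(product_fails_mandatory_safety_requirements=True)))  # True
```

The any-of structure is the point: a claimant needs to establish only one condition, and an AI Act supervisory finding supplies one ready-made.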

The second and fifth conditions are the ones that create the direct link to AI Act compliance. "Mandatory product safety requirements" in EU law include the obligations imposed by the AI Act on providers and deployers of high-risk AI systems. A claimant who demonstrates, through a supervisory decision or through documentary evidence, that a deployer was not in compliance with Article 26 of the AI Act at the time an incident occurred has met the condition for a defectiveness presumption under Article 10(2) of the PLD.

Article 10(3) addresses the causal link separately. Where a claimant faces excessive difficulty in proving that a product defect caused the damage suffered, the causal link shall be presumed if the defect appears capable of causing the type of damage suffered, having regard to the circumstances of the case.

The presumptions are rebuttable, not irrebuttable. A defendant who can demonstrate through technical documentation how the system operates, what testing it underwent, that it met the applicable standards at the time it was placed on the market, and that the claimant's harm had a different cause, can defeat the presumption. The practical effect is not that defendants automatically lose. It is that defendants without documentation automatically face a shifted burden they cannot discharge.

Article 9 of the directive provides the complementary disclosure mechanism: where a claimant has presented facts and evidence sufficient to make a product liability claim plausible, the court may order the defendant to disclose relevant evidence within the defendant's control. The evidence subject to disclosure includes technical documentation, design specifications, test results, and post-market surveillance records. The strategic inference for AI providers is direct: a structured, well-maintained technical file is the evidentiary asset that PLD litigation will be fought over.

Section 6. Expanded damage categories.

Article 4(6) of Directive (EU) 2024/2853 defines compensable damage in four categories. The first two, death or personal injury and damage to or destruction of private property above a EUR 1,000 threshold, existed in the old directive. The last two are new.

The third category is destruction or corruption of data or digital files not used exclusively for professional purposes. An AI agent that corrupts a user's personal financial records, destroys stored documents through faulty tool execution, or overwrites user data through an erroneous write action has caused compensable damage under the new directive. The exclusion of data used exclusively for professional purposes limits the category's application in purely enterprise B2B contexts, but consumer-facing and mixed-use AI deployments carry full exposure.

The fourth category is medically recognised psychological harm. This category is directly relevant to AI systems that produce outputs causing documented distress: a model that generates harmful content, produces false information about an identifiable person, or facilitates harassment through autonomous outputs may have caused psychological harm within the scope of the directive. The requirement for medical recognition means that self-reported distress without clinical assessment is unlikely to found a claim. Documented clinical harm is the threshold.

The damage categories should be read against the AI system's failure modes, not against the categories used in internal risk assessments. An agentic AI system with tool access that can send emails, execute payments, write files, and make calls to external APIs has a data corruption exposure that a read-only language model does not. The mapping exercise between failure modes and compensable damage categories is the first step in a PLD exposure analysis.

The EUR 1,000 property damage threshold applies only to the property damage category. It does not cap personal injury, data loss, or psychological harm claims. It is a design choice that limits small-claims mass litigation for minor property incidents, not a ceiling on serious losses.
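The threshold logic, with EUR 1,000 applying to property damage alone, can be sketched as follows. The category names and the admissibility framing are illustrative simplifications for exposition, not the directive's drafting.

```python
def claim_admissible(category: str, amount_eur: float) -> bool:
    """Sketch of the four Article 4(6) damage categories. Only the
    property-damage category carries the EUR 1,000 threshold; the
    other three categories have no monetary floor."""
    if category == "property_damage":
        return amount_eur > 1_000
    return category in {"personal_injury", "data_corruption", "psychological_harm"}

print(claim_admissible("property_damage", 800))   # below threshold
print(claim_admissible("data_corruption", 800))   # no threshold applies
```

The asymmetry is deliberate: a EUR 800 scratched laptop is out of scope, but EUR 800 of corrupted personal data is not.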

Section 7. The provider versus deployer split.

The AI Act and the PLD each divide liability between parties in the supply chain, but they do so along different axes. The table below maps the primary duty-bearer at each point of exposure.

Obligation or exposure | Instrument | Primary duty-bearer | Secondary / concurrent exposure
--- | --- | --- | ---
Conformity assessment (Annex VI or VII) | AI Act | Provider | Deployer if it has become the provider under Article 25
Technical documentation (Article 11, Annex IV) | AI Act | Provider | Deployer must receive and hold instructions for use
Instructions for use compliance (Article 26(1)) | AI Act | Deployer | None; deployer bears this obligation exclusively
Human oversight (Article 26(2)) | AI Act | Deployer | Provider must design for overridability under Article 14
Serious incident reporting (Article 26(5)) | AI Act | Deployer (reports to provider and authority) | Provider must report to national authority within 15 days
Fundamental Rights Impact Assessment (Article 27) | AI Act | Deployer (qualifying categories) | None
Manufacturer liability for defective product (Article 7) | PLD | Provider (as manufacturer) | Deployer if it modifies the system or markets it under own name
Joint and several liability (Article 8) | PLD | Any economic operator who contributed to damage | Full recovery available from any party in chain
Rebuttable presumption of defect (Article 10) | PLD | Provider rebuttal burden (defect) | Deployer rebuttal burden (AI Act compliance trigger)
Disclosure of technical documentation (Article 9) | PLD | Provider (holds technical file) | Deployer (holds operational records and monitoring data)

The critical observation from the table is that the regimes converge on the deployer at the Article 10 rebuttable presumption trigger. If the deployer is in breach of Article 26 of the AI Act, a claimant may use that breach to invoke the PLD defectiveness presumption. The deployer's non-compliance with a regulatory obligation becomes the foundation for a private law damages claim. This is the interaction point that most commentary has failed to trace through to its operational consequence.

Section 8. A scenario walkthrough.

The following scenario is constructed to illustrate how both regimes operate simultaneously on a single AI failure event. The facts are representative of a class of deployment that is common in 2026.

The deployment. A regulated lender operating across three EU member states deploys an autonomous credit-scoring AI agent. The agent ingests applicant financial data, accesses external credit bureau APIs as a tool, generates a credit score, and automatically issues a lending decision without mandatory human review for applications below a set credit threshold. The agent is a high-risk AI system under Annex III, point 5(b) of the AI Act, which covers systems used to evaluate the creditworthiness of natural persons. The provider is a US-headquartered fintech that markets the agent under a commercial SaaS contract and has designated an EU authorised representative.

The incident. The agent misclassifies an applicant due to a data processing error in the credit bureau API integration, issuing an automatic rejection. The applicant, a self-employed individual in the Netherlands, is refused a business loan that was critical to a commercial opportunity with a six-week window. The opportunity expires before a manual review can be completed. The applicant suffers documented financial loss of EUR 47,000. A psychiatrist subsequently documents that the erroneous rejection triggered a clinically significant depressive episode lasting three months.

The AI Act dimension. The national market surveillance authority in the Netherlands receives a report from the applicant. An investigation establishes that the deployer had not implemented the human oversight requirement of Article 26(2) for applications below the threshold level. The deployer's oversight register named a compliance officer but that officer had not received training on the system's outputs and had no technical access to override decisions. A supervisory finding of Article 26(2) non-compliance is issued. The deployer faces a fine in the EUR 8 million range under Article 99.

The PLD dimension. The applicant initiates a civil claim in a Dutch court against both the US fintech provider (through its EU authorised representative) and the lender deployer, seeking EUR 47,000 in financial loss and damages for psychological harm. The applicant's lawyers point to the supervisory finding of Article 26(2) non-compliance as meeting the Article 10(2) trigger condition: the deployer has failed to comply with a mandatory product safety requirement. The Dutch court holds that the defectiveness presumption is engaged as against the deployer. The deployer must now rebut it by demonstrating either that the system was not defective or that the harm had a different cause.

Separately, the claimant applies under Article 9 for disclosure of the provider's technical file, including the credit bureau API integration documentation and the test results for the threshold-level decision pathway. The court orders disclosure, subject to proportionality review and redaction of unrelated trade secrets. The disclosed documents reveal that the API integration was not independently tested by the provider for the input data formats used in the deployer's specific market.

The insurance dimension. The deployer's cyber policy contains an AI activity endorsement excluding autonomous financial decision-making from coverage. The deployer's professional indemnity policy excludes product liability claims by endorsement added at the 2025 renewal. The deployer is uninsured for the PLD claim. The deployer's contractual indemnity from the provider covers only claims arising from defects in the base model, not defects arising from the API integration with the deployer's credit bureau configuration. The deployer pays the claim and the fine from its own balance sheet.

This scenario is not a prediction of a specific case. It is a structural illustration of how the two regimes interact in a realistic deployment context. The deployer's Article 26(2) breach becomes the PLD Article 10(2) trigger. The supervisory proceeding and the civil claim run concurrently, each using overlapping evidence. The insurance gaps compound the financial exposure. The contractual indemnity does not cover the ground the deployer assumed it did.

Section 9. Insurance market response.

The insurance market's response to the double exposure is bifurcating along a specialist-generalist axis, with the two categories moving in opposite directions.

Specialist carriers who entered the AI liability market early are actively writing meaningful limits across the scope of both regimes. Munich Re's aiSure product is the most cited example in the European reinsurance market, offering coverage structures that explicitly contemplate AI Act regulatory exposure and PLD civil liability as concurrent risks. HSB (Hartford Steam Boiler), operating through its specialty technology division, is writing AI product liability with coverage that addresses data corruption claims, reflecting the PLD's expanded damage categories. Armilla, operating in a Chaucer coverholder arrangement at Lloyd's, has built underwriting criteria around AI governance documentation, meaning that an operator with a structured Article 26 file and a conformity assessment record receives meaningfully better terms than one without.

AIUC, the AI Underwriting Consortium whose AIUC-1 Standard was published in July 2025, is referenced by a growing number of specialist carriers as the underwriting framework against which AI system compliance is assessed. Testudo and Counterpart are writing directors-and-officers adjacent AI liability, covering regulatory enforcement exposure under the AI Act alongside civil claims. Corgi is active in the SME segment with modular AI liability products that address both regulatory and civil exposure at smaller premium points.

Generalist carriers are moving in the opposite direction. AIG, W.R. Berkley, and Great American have each tightened their treatment of AI activity exposure through endorsement revisions at 2025 and 2026 renewals, adding exclusions or sublimits for autonomous AI decision-making, for claims arising from AI-generated outputs, and for losses arising from AI systems subject to regulatory investigation. These endorsements do not require an explicit AI Act finding to trigger: the language typically refers to systems that take autonomous action on behalf of the insured without human approval, which describes a wide class of agentic deployments.

The Lloyd's coverholder layer provides the middle ground. Several Lloyd's coverholders are building hybrid products that combine a base layer of technology professional liability with an AI rider addressing PLD-style product defect claims. The coverage terms vary significantly between coverholders, and the interaction between the base layer and the AI rider in the event of a concurrent claim is not uniformly resolved in the policy language. Deployers buying through this layer should request coverage confirmation in writing for a scenario matching the structure of Section 8 of this article.

The practical implication for AI deployers in 2026 is that standard renewal of existing cyber and professional indemnity cover does not maintain the insurance position they held in 2024. The coverage has narrowed through endorsement. The exposure has broadened through the two activation calendars. The gap between them has widened at each renewal cycle since 2024.

Section 10. What to file before 9 December 2026.

The operator who waits for both instruments to be fully in application before beginning documentation assembly will be building under adversarial conditions. The documentation that rebuts the Article 10 presumption in civil litigation is the same documentation that demonstrates Article 26 compliance in a supervisory inquiry. It should be assembled once, structured for both purposes, and maintained continuously from the point of deployment.

The AI Act operator file. Five documents should exist as a minimum for any high-risk AI deployment operating from 2 August 2026:

  1. A risk record, documenting the risk classification analysis, the applicable provisions of Annex III, and the residual risks accepted at the point of deployment.
  2. An oversight register, naming the qualified human supervisor by role, documenting their training, and setting out their override authority and access.
  3. An instructions-for-use map, showing where the deployment sits relative to the provider's stated operational boundaries, and identifying any areas where the deployment approaches or tests those boundaries.
  4. A logging schedule, aligned to the Article 12 data retention requirements and specifying how logs will be stored, secured, and made available to authorities upon request.
  5. An incident protocol, specifying the procedure for identifying, classifying, and reporting serious incidents under Article 26(5), including internal escalation paths and the authority's notification channel.

The PLD evidentiary file. The following additional records are structurally important for PLD defence:

  1. Technical documentation of the system as supplied by the provider, including the conformity assessment report where one is required, and the declaration of conformity.
  2. A record of any modifications made to the system after receipt, including fine-tuning, system prompt changes at scale, tool integrations, and changes to the deployment context or user population.
  3. Post-market monitoring records, documenting the cadence of performance review, any anomalies identified, and the actions taken in response. This is the record that demonstrates the system was not defective at the time of an incident.
  4. A supply chain record, documenting the provider's contractual obligations regarding system updates, notification of safety-relevant changes, and cooperation in the event of a regulatory inquiry or civil claim.
  5. An insurance confirmation record, documenting which policies are in force, the AI activity coverage terms, and the written confirmation requested from any ambiguous coverholder product regarding the Section 8 scenario structure.

The combined operator file can be maintained in a single structured folder with a cover sheet indexing both the AI Act compliance evidence and the PLD evidentiary content. The cover sheet is the document that demonstrates, in the first pages of any supervisory inquiry or disclosure request, that the operator treated both instruments as binding simultaneously.
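The completeness of the combined operator file described above can be checked mechanically against the two lists in this section. The record names below are shorthand for those lists; the flat-set representation is an assumption made for the sketch.

```python
# Shorthand names for the five AI Act operator documents listed above.
AI_ACT_FILE = {
    "risk_record", "oversight_register", "instructions_for_use_map",
    "logging_schedule", "incident_protocol",
}

# Shorthand names for the five PLD evidentiary records listed above.
PLD_FILE = {
    "technical_documentation", "modification_record",
    "post_market_monitoring", "supply_chain_record",
    "insurance_confirmation",
}

def missing_documents(held: set[str]) -> set[str]:
    """Return any required records absent from the combined operator file."""
    return (AI_ACT_FILE | PLD_FILE) - held

# An operator with a complete AI Act file but a partial PLD file still
# has gaps that a disclosure request would expose.
held = AI_ACT_FILE | {"technical_documentation", "supply_chain_record"}
print(sorted(missing_documents(held)))
```

A check of this shape belongs in the cover sheet process: the index is only useful if it is verified against the folder contents on a fixed cadence.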

Section 11. Frequently asked questions.

What is the EU Product Liability Directive 2024?

Directive (EU) 2024/2853, adopted on 23 October 2024, repeals and replaces Council Directive 85/374/EEC. It extends the definition of product to include software and AI systems supplied commercially, expands compensable damage to include data loss and medically recognised psychological harm, introduces rebuttable presumptions of defect where claimants face excessive difficulty proving their case, and gives courts the power to order disclosure of technical documentation. Member states must transpose it by 9 December 2026.

When does the new Product Liability Directive apply?

The directive entered into force on 8 December 2024. The member state transposition deadline is 9 December 2026. The old directive, 85/374/EEC, is repealed upon transposition. Products placed on the market before the transposition deadline in each member state remain subject to the old rules in that state. Products placed on the market after transposition face the new regime from the date of placement.

Is AI software covered by the new Product Liability Directive?

Yes. Article 4(1) defines product to include software. Recital 12 confirms that the definition covers AI systems whether delivered as standalone products, embedded in physical devices, or provided as cloud-hosted services. The commercial threshold is the relevant qualifier: purely free and open-source software distributed without commercialisation falls outside scope. Software supplied in exchange for personal data or in a commercial context is inside it. Most enterprise and SaaS AI deployments meet this threshold.

What is the rebuttable presumption of defect under PLD Article 10?

Article 10 provides that defectiveness shall be presumed where any of five conditions is met, including non-compliance with mandatory product safety requirements and non-compliance with applicable legal standards designed to protect against the type of damage suffered. For AI deployers, this means an AI Act Article 26 breach is a direct trigger condition. The presumption shifts the burden of proof to the defendant, who must demonstrate either that the system was not defective or that the claimant's harm had a different cause. It is rebuttable through technical documentation and monitoring records.

How does the Product Liability Directive interact with the EU AI Act?

The two instruments operate in parallel. The AI Act sets governance obligations and creates regulatory penalties. The PLD creates civil liability for damage caused by defective AI products. The most significant interaction is that AI Act non-compliance is a trigger condition for the PLD rebuttable presumption of defect under Article 10. A deployer found in breach of Article 26 is therefore in a materially weaker position against a concurrent PLD civil claim. The documentation that satisfies AI Act compliance requirements also serves as the primary evidentiary asset in PLD defence.

Who is liable under the PLD: the AI provider or the deployer?

The primary liability target under Article 7 is the manufacturer, which in the AI context is the provider that developed and placed the system on the market. Where the manufacturer is outside the EU, an authorised representative or importer assumes the liability position. Deployers who modify a system beyond its intended purpose, or who market it under their own name, may acquire manufacturer-equivalent exposure. Article 8 provides joint and several liability across all economic operators who contributed to the same damage: a claimant can recover from any party in the chain.

What damages are recoverable under the new Product Liability Directive?

Article 4(6) defines compensable damage in four categories: death or personal injury; damage to or destruction of private property above a EUR 1,000 threshold; destruction or corruption of data or digital files not used exclusively for professional purposes; and medically recognised psychological harm. The last two categories are new and directly relevant to AI failure modes. An agent that corrupts user data or produces outputs causing documented clinical distress has caused compensable damage under the new regime.

Does the Product Liability Directive have extraterritorial reach?

Yes, with qualifications. Where an AI system is placed on the EU market and causes damage to a person in the EU, the directive applies regardless of where the manufacturer is established. Where the manufacturer is outside the EU, an authorised representative or importer established in the EU assumes the manufacturer's liability position under Article 7. Non-EU AI providers with no EU establishment and no EU authorised representative create a liability gap that leaves EU importers and distributors of their systems directly exposed.

What insurance covers Product Liability Directive claims for AI?

Standard cyber and professional indemnity policies are being revised to exclude or sub-limit autonomous AI activity. The correct instrument for PLD claims arising from AI defects is a product liability policy specifically written for AI systems, covering defect-based claims including data loss and psychological harm. Specialist carriers actively writing meaningful limits in 2026 include Munich Re aiSure, Armilla via Chaucer at Lloyd's, HSB, AIUC, Testudo, Counterpart, and Corgi. Generalist carriers including AIG, W.R. Berkley, and Great American are tightening through endorsements. Buyers should request written confirmation that their policy responds to a concurrent AI Act supervisory inquiry and PLD civil claim arising from the same event.

What documentation must a deployer hold under both the PLD and the EU AI Act?

Under the AI Act, a high-risk AI deployer must hold: a risk record, an oversight register, an instructions-for-use map, a logging schedule, and an incident protocol. Under the PLD, the same deployer should hold: technical documentation of the system as supplied by the provider, conformity assessment records where applicable, a record of any modifications made to the system post-supply, post-market monitoring records, and a supply chain record documenting the provider's notification and cooperation obligations. These documentation sets overlap substantially and can be maintained as a single unified operator file.
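Since the two documentation sets can live in one unified operator file, a simple completeness check is easy to automate. The folder and document names below mirror the lists above but are otherwise illustrative, not regulatory terms.

```python
# Hypothetical layout of a single unified operator file combining the
# AI Act deployer documents and the PLD evidentiary records listed above.
UNIFIED_OPERATOR_FILE = {
    "ai_act": [
        "risk_record",
        "oversight_register",
        "instructions_for_use_map",
        "logging_schedule",
        "incident_protocol",
    ],
    "pld": [
        "technical_documentation_as_supplied",
        "conformity_assessment_records",
        "post_supply_modification_record",
        "post_market_monitoring_records",
        "supply_chain_record",
    ],
}

def missing_documents(held: set[str]) -> list[str]:
    """Return the documents from both regimes not yet present in `held`."""
    required = {doc for docs in UNIFIED_OPERATOR_FILE.values() for doc in docs}
    return sorted(required - held)
```

Running the check against an empty file lists all ten documents; a complete file returns an empty list, which is the state a deployer wants before 2 August 2026.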

Related reading

For the foundational analysis of the revised Product Liability Directive as a standalone instrument, including the defect standard, the damage categories, and the disclosure mechanism in full, see our PLD 2024 briefing on this site.

For the full Article 26 operator obligations guide, including the five minimum documents and the human oversight design standard, see the 2026 compliance guide to EU AI Act operator obligations.

For the FRIA requirement under Article 27, which enters into application on the same date as Article 26 for qualifying deployers, see the Article 27 FRIA guide and the 90-day countdown checklist.

For the multi-party liability chain analysis, covering when a deployer becomes the provider under Article 25 and how joint and several liability operates in practice, see the provider-deployer liability chain briefing.

For a current view of which carriers are writing meaningful AI product liability limits and how coverage terms compare across the market, see the Carrier Comparison Matrix at agentinsured.eu.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, OJ L, 12 July 2024.
  2. Regulation (EU) 2024/1689, Article 26, obligations of deployers of high-risk AI systems. Effective 2 August 2026.
  3. Regulation (EU) 2024/1689, Article 27, fundamental rights impact assessment for high-risk AI systems. Effective 2 August 2026 for qualifying deployers.
  4. Regulation (EU) 2024/1689, Article 25, obligations of deployers in certain cases (deployer-to-provider reclassification).
  5. Regulation (EU) 2024/1689, Article 99, penalties. Second-tier fines up to EUR 15 million or 3 percent of worldwide annual turnover for Article 26 breaches.
  6. Regulation (EU) 2024/1689, Annex III, point 5(b), high-risk AI systems used to evaluate the creditworthiness of natural persons.
  7. Regulation (EU) 2024/1689, Article 11 and Annex IV, technical documentation requirements for high-risk AI systems.
  8. Regulation (EU) 2024/1689, Article 12, record-keeping and logging requirements.
  9. Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC, OJ L, 18 November 2024. Entered into force 8 December 2024.
  10. Directive (EU) 2024/2853, Article 4(1), definition of product, expressly including software.
  11. Directive (EU) 2024/2853, Recital 12, confirming that AI systems in all delivery forms are included in the product definition.
  12. Directive (EU) 2024/2853, Article 4(6), definition of damage, including data corruption and medically recognised psychological harm.
  13. Directive (EU) 2024/2853, Article 7, liable economic operators, establishing the manufacturer as primary target.
  14. Directive (EU) 2024/2853, Article 8, joint and several liability of multiple economic operators.
  15. Directive (EU) 2024/2853, Article 9, disclosure of evidence and court powers to order defendant disclosure.
  16. Directive (EU) 2024/2853, Article 10, rebuttable presumption of defectiveness, including non-compliance with mandatory product safety requirements as an express trigger condition.
  17. Directive (EU) 2024/2853, Articles 13 and 14, three-year limitation period and ten-year long-stop, extended to twenty-five years for latent personal injuries whose symptoms are slow to emerge.
  18. Council Directive 85/374/EEC of 25 July 1985 on liability for defective products, OJ L 210, 7 August 1985. Repealed upon member state transposition of Directive (EU) 2024/2853.
  19. European Commission, Proposal for a Directive on liability for artificial intelligence, COM(2022) 496 final, 28 September 2022. The Commission's impact assessment confirmed that the revised PLD and the AI-specific liability rules were designed as complementary instruments.
  20. EIOPA, Opinion on the use of Artificial Intelligence by Insurance and Reinsurance Undertakings, EIOPA-BoS-21/612. Sets supervisory expectations for AI governance in the insurance sector that inform how EIOPA-supervised entities approach the AI Act compliance and PLD evidentiary file simultaneously.
  21. Bird & Bird LLP, "The New EU Product Liability Directive and AI: What Changes for Technology Companies," November 2024. Analysis confirming that AI software falls within the Article 4(1) product definition and that the Article 10 presumptions will shift litigation economics for AI claims.
  22. Freshfields Bruckhaus Deringer LLP, "EU AI Liability: The Interaction Between the AI Act and the Revised Product Liability Directive," December 2024. Analysis tracing the Article 10(2) trigger conditions to AI Act compliance obligations.
  23. A&O Shearman, "Product Liability for AI Systems: The Revised Directive and the AI Act Together," January 2025. Notes the convergence of the two activation calendars and the evidentiary bridge created by the AI Act documentation requirements.
  24. EUR-Lex, full text of Directive (EU) 2024/2853, available at eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024L2853.
  25. EUR-Lex, full text of Regulation (EU) 2024/1689, available at eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.