The 1985 Product Liability Directive was written before the internet existed. It covered physical goods. It treated software as an intangible outside its scope, leaving claimants to prove fault under national tort law in most cases involving complex software products. The revised directive, Directive 2024/2853, corrects each of those limitations. For AI providers and deployers, the effect is substantial.

Key takeaways

  • Directive 2024/2853 entered into force on 8 December 2024. Member states have until 9 December 2026 to transpose it into national law.
  • The directive explicitly includes software, digital content, and digital services in its product definition. AI systems provided commercially are covered, whether on-premise or cloud-hosted.
  • Compensable damage now includes destruction or corruption of data and medically recognised psychological harm, both highly relevant to AI failure modes.
  • Article 9 rebuttable presumptions substantially lower the claimant's burden of proof where an AI system's internal logic is opaque. This is the provision that changes the litigation calculus most materially.
  • Products placed on the EU market before 9 December 2026 remain subject to the old rules; products placed on the market from that date fall under the new regime.

Why the 1985 directive was inadequate for AI

The original Product Liability Directive, 85/374/EEC, was built around a simple idea: where a defective product causes damage, the producer is liable without the claimant needing to prove negligence. That principle was sound. The implementation was not designed for products that update automatically, that operate autonomously, that cause harm through outputs rather than physical failure, or that are delivered as a service over a network.

Three specific gaps defined the old regime's inadequacy for AI. First, software was treated as intangible and therefore outside the product definition in most member state interpretations, leaving AI-caused harm to be resolved under national tort law with its fault requirements. Second, the damage types covered (personal injury and damage to private property above a EUR 500 threshold) did not include data loss or psychological harm, two categories frequently relevant to AI failures. Third, the burden of proof was in practice high: proving that a model's output was the proximate cause of a specific loss, and that the model was defective at the time it was placed on the market, required evidence that plaintiffs rarely held and defendants rarely disclosed.

Each of these gaps is addressed directly in Directive 2024/2853.

What changes: the product definition

Article 4(1) of the new directive defines product to mean "any movable item, even if integrated in another movable item or in an immovable item, as well as electricity, digital manufacturing files and software." Recital 12 clarifies that the definition covers software in all forms: operating systems, firmware, computer programs, applications, and AI systems, regardless of whether they are delivered as standalone products or embedded in a physical device or service.

The commercial threshold is the relevant qualifier. Purely free and open-source software distributed without any commercialisation falls outside the scope. Software supplied in exchange for personal data, software supplied as part of a commercial service relationship, and software embedded in a product sold on the market are all inside it. For most enterprise AI deployments, SaaS AI services, and API-based AI products, this means the directive applies.
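
As a first-pass illustration of that scoping logic, the sketch below encodes the commercial threshold as a simple decision function. The `SoftwareProduct` type and its field names are hypothetical conveniences for illustration, not terms drawn from the directive.

```python
from dataclasses import dataclass

@dataclass
class SoftwareProduct:
    """Facts relevant to the product definition (illustrative fields only)."""
    supplied_commercially: bool       # sold, licensed, or embedded in a marketed product
    supplied_for_personal_data: bool  # "free" but monetised through user data
    pure_foss_no_commercialisation: bool

def in_scope(p: SoftwareProduct) -> bool:
    """Rough first-pass scope test against Directive 2024/2853's product definition.

    Purely free and open-source software distributed without any
    commercialisation is out of scope; software supplied commercially
    or in exchange for personal data is inside it.
    """
    if p.pure_foss_no_commercialisation:
        return False
    return p.supplied_commercially or p.supplied_for_personal_data

# A cloud-hosted AI API sold by subscription is supplied commercially: in scope.
assert in_scope(SoftwareProduct(True, False, False))
# A hobbyist open-source model with no commercialisation attached: out of scope.
assert not in_scope(SoftwareProduct(False, False, True))
```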

The directive also covers "related services" where those services have a direct connection to a product and affect its safety. An AI model provider that offers continued fine-tuning, monitoring, or update services as part of a commercial arrangement may see those services drawn into the product liability analysis if they affect the system's outputs.

What changes: the damage types

Article 4(6) defines damage to include four categories: death or personal injury; damage to, or destruction of, property, subject to a EUR 1,000 lower threshold; destruction or corruption of data or digital files that are not used exclusively for professional purposes; and medically recognised psychological harm.

The first two categories existed in the 1985 directive. The last two are new and significant for AI. An AI agent that corrupts a personal financial record, destroys a user's documents, or overwrites data through faulty tool execution is now causing compensable damage under EU product liability law. An AI system that produces outputs causing documented psychological distress, whether through harmful content, harassment facilitation, or false information about a person, has caused damage within the scope of the directive.

The EUR 1,000 property damage threshold and the exclusion of data used exclusively for professional purposes are design choices that limit mass litigation risk for minor incidents. They do not cap liability in serious cases, and they do not apply to personal injury or psychological harm claims.
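
To make those screening mechanics concrete, here is a minimal sketch of the four damage categories and the two limiting devices just described. The enum and function are hypothetical illustrations; compensability in practice turns on national implementing law and the facts of the claim.

```python
from enum import Enum, auto

class DamageType(Enum):
    PERSONAL_INJURY = auto()  # includes death
    PROPERTY = auto()         # subject to the EUR 1,000 lower threshold
    DATA_LOSS = auto()        # destruction or corruption of data or digital files
    PSYCHOLOGICAL = auto()    # medically recognised psychological harm

def compensable(damage: DamageType, amount_eur: float = 0.0,
                data_professional_only: bool = False) -> bool:
    """First-pass screen of a loss against the Article 4(6) categories.

    The EUR 1,000 floor applies only to property damage, and data used
    exclusively for professional purposes is excluded from the data
    category. Personal injury and psychological harm carry no threshold.
    """
    if damage is DamageType.PROPERTY:
        return amount_eur >= 1_000
    if damage is DamageType.DATA_LOSS:
        return not data_professional_only
    return True

# An AI agent that corrupts a user's personal files causes compensable damage.
assert compensable(DamageType.DATA_LOSS)
# EUR 400 of property damage falls below the floor.
assert not compensable(DamageType.PROPERTY, amount_eur=400)
```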

The defect standard

Article 6 preserves the core principle of the original directive: a product is defective when it does not provide the safety that persons are generally entitled to expect. The new directive refines how that expectation is assessed for software and AI. Among the factors courts must consider are:

  • the reasonably foreseeable use of the product, including use that deviates from the instructions;
  • the ability of the product to continue to provide the safety expected throughout its lifecycle;
  • the effect of updates, algorithm changes, and machine learning modifications on the product's safety over time;
  • the product's cybersecurity protections.

The lifecycle provision is particularly relevant for AI. A large language model that was safe at release but has drifted in its outputs due to distribution shift, or a reinforcement-learning agent whose behaviour has changed through continued training in deployment, may be defective at the time of damage even though it passed all safety assessments when first placed on the market. Providers who deploy continuously learning systems should assess their lifecycle monitoring obligations carefully.

The rebuttable presumptions: Article 9

Article 9 is the provision that changes the litigation economics most materially. It addresses the "excessive difficulty" problem that courts and claimants face when dealing with complex technical products, including AI systems whose internal operation is opaque.

Where the claimant faces excessive difficulty in establishing the defectiveness of the product, the court may apply a rebuttable presumption of defectiveness if the claimant demonstrates that the product contributed to the damage and that the product's defectiveness is plausible in light of the available evidence. In practice, for an AI system, showing that the system produced an output that caused harm, and that the harm is the kind that a defective AI could plausibly cause, may be sufficient to shift the burden of proof to the defendant to demonstrate that no defect existed.

Article 9 also provides that where there is no apparent cause for the damage other than the product's defectiveness, a court may presume defectiveness. The article addresses causation separately: where it is excessively difficult to prove a causal link between a product's defect and the damage, the court may presume causation where the defect appears capable of causing the type of harm suffered.
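
Read together, those provisions give a claimant two discretionary routes to a presumption of defectiveness and a separate route to a presumption of causation. The sketch below is a hypothetical encoding of the tests as paraphrased above, not the statutory text; each input stands for a judicial finding that no boolean can fully capture.

```python
def defectiveness_presumed(excessive_difficulty: bool,
                           product_contributed: bool,
                           defect_plausible: bool,
                           no_other_apparent_cause: bool) -> bool:
    """Conditions under which a court MAY presume defectiveness (Article 9).

    True means the burden can shift to the defendant, not that the
    defendant is liable: the presumption remains rebuttable.
    """
    route_a = excessive_difficulty and product_contributed and defect_plausible
    route_b = no_other_apparent_cause
    return route_a or route_b

def causation_presumed(excessive_difficulty: bool,
                       defect_capable_of_harm_type: bool) -> bool:
    """Parallel test for the causal link between defect and damage."""
    return excessive_difficulty and defect_capable_of_harm_type
```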

These are rebuttable presumptions, not irrebuttable ones. A defendant who can show how the system works, that it was designed and tested to a recognised standard, and that the claimant's harm had a different cause, can defeat them. The practical incentive this creates for AI providers and deployers is to maintain thorough technical documentation, testing records, and post-market monitoring data. The documentation file that Article 9 makes strategically valuable is the same file that Article 26 of the EU AI Act requires operators to hold.

The disclosure mechanism: Article 10

Article 10 gives courts the power to order defendants to disclose relevant evidence within their control where the claimant has presented facts sufficient to make a product liability claim plausible. The evidence subject to disclosure includes the technical documentation of the product, its design specifications, test results, and post-market surveillance records. Courts must ensure proportionality and protect confidential business information and trade secrets.

For AI providers, this is a significant procedural shift. In litigation under the old directive, the internal workings of a model were often practically inaccessible to claimants. Under the new framework, a court can compel disclosure of the documentation that explains how the system was built, tested, and monitored. This gives claimants access to information that strengthens their Article 9 rebuttable presumption argument and may reveal design choices the defendant would prefer to keep private.

The strategic response for AI providers is proactive documentation. A provider who maintains a structured technical file, aligned with the documentation requirements of EU AI Act Article 11 and Annex IV, is in a better position in Article 10 disclosure proceedings than one who does not, because the existing file is already structured for an informed audience and does not need to be reconstructed under adversarial conditions.

Who is liable

The primary liability target under Article 7 is the manufacturer of the defective product, defined as the entity that developed the product and placed it on the market. In the AI context, this is the provider that developed the model or system and placed it on the market. Where the manufacturer is established outside the EU, an authorised representative or importer established in the EU assumes the manufacturer's liability position. Any economic operator that modifies a product already placed on the market in a way that affects its safety becomes the manufacturer of the modified product for the purposes of the directive.

That final point is directly relevant to deployers who fine-tune, extend, or chain AI systems beyond their provider's intended purpose. A deployer that takes a general-purpose language model and fine-tunes it on proprietary data to perform specialised legal or medical advice functions has modified the product. If that modification affects the system's safety, the deployer may be treated as the manufacturer of the resulting system for product liability purposes.

Article 8 adds joint and several liability where multiple economic operators are each responsible for the same damage. In a multi-party AI deployment chain, this means a claimant can sue any party in the chain for the full amount of their loss, leaving those parties to sort contribution between themselves. This is a structural feature of the EU framework that has significant implications for contractual allocation of liability in AI supply agreements.

Limitation and long-stop periods

Article 13 provides a three-year limitation period from the date the claimant became aware, or should reasonably have become aware, of the damage, the defect, and the identity of the liable party. Article 14 provides a ten-year long-stop period from the date the defective product was placed on the market, extended to twenty-five years where the damage is latent and could not surface within the ten-year window. For AI systems that remain in production and continue to cause harm over extended periods, the twenty-five-year period is a realistic ceiling rather than a theoretical one.
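
A rough sketch of the resulting date arithmetic, assuming the periods run from exact calendar anniversaries; national transpositions and procedural rules may count the days differently.

```python
from datetime import date

def add_years(d: date, n: int) -> date:
    """Calendar-anniversary addition; 29 February falls back to 28 February."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:
        return d.replace(year=d.year + n, day=28)

def claim_windows(placed_on_market: date, claimant_aware: date,
                  latent: bool = False) -> dict:
    """Illustrative limitation (Article 13) and long-stop (Article 14) dates."""
    return {
        "limitation_expires": add_years(claimant_aware, 3),
        "long_stop_expires": add_years(placed_on_market, 25 if latent else 10),
    }

windows = claim_windows(placed_on_market=date(2027, 1, 15),
                        claimant_aware=date(2029, 6, 1))
# {'limitation_expires': datetime.date(2032, 6, 1),
#  'long_stop_expires': datetime.date(2037, 1, 15)}
```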

The relationship to the EU AI Act

The EU AI Act and Directive 2024/2853 operate in parallel. An AI provider or deployer who breaches the EU AI Act is not automatically liable under the directive, and compliance with the AI Act does not automatically mean the product is non-defective for the directive's purposes. The two instruments pursue different objectives through different mechanisms: the AI Act sets governance obligations and creates regulatory penalties; the directive creates civil liability for damage caused by defective products.

In practice, however, compliance with the EU AI Act's documentation and risk management requirements generates evidence that is directly relevant to a directive liability defence. An operator who can demonstrate that their system was assessed for risk under Article 9 of the AI Act, documented under Articles 11 and 12, and monitored under Article 72 is in a substantially stronger position against an Article 9 rebuttable presumption claim than one who cannot.

The documentation architecture described in our earlier briefing on AI agent risk management documentation is designed to serve both regimes simultaneously. The overlap between the AI Act's technical documentation requirements and the evidence that matters in a product liability action is not coincidental.

What operators should do before December 2026

The transposition deadline of 9 December 2026 is eight months away. The following actions are appropriate for most AI providers and deployers operating in the EU.

First, map the AI systems in scope. Not every AI system meets the commercial threshold or the product definition. The exercise begins with identifying which systems are supplied commercially, which are embedded in products placed on the EU market, and which are delivered as related services to covered products. Systems outside scope can be set aside. Systems inside scope need the steps below.

Second, review the defect standard against current outputs. Article 6's lifecycle provision means that a system's safety needs to be assessed not just at release but at each point during its operational life. Providers of continuously updating models should establish a cadence for reassessing whether the model continues to meet the safety standard users are entitled to expect.

Third, review the damage types against the system's failure modes. An AI agent that can take tool actions, write to files, send emails, or make payments on behalf of users carries data corruption and psychological harm exposure that a purely read-only system does not. The exposure map should be built against the directive's damage categories, not just the risk categories used internally.
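
One way to build that map is to key each agent capability to the directive's damage categories it can plausibly reach. The capability names below are illustrative assumptions about a typical agent deployment, not terms from the directive.

```python
# Hypothetical exposure map: agent capability -> reachable damage categories.
EXPOSURE_MAP: dict[str, list[str]] = {
    "file_write":   ["data_destruction_or_corruption"],
    "send_email":   ["psychological_harm"],       # e.g. harassment facilitation
    "make_payment": ["property_damage"],
    "tool_exec":    ["data_destruction_or_corruption", "property_damage"],
    "read_only":    [],                           # no direct damage pathway
}

def exposure(capabilities: list[str]) -> set[str]:
    """Union of damage categories reachable from a system's capabilities."""
    return {cat for cap in capabilities for cat in EXPOSURE_MAP.get(cap, [])}

# An agent that can write files and send email carries both new damage types.
print(sorted(exposure(["file_write", "send_email"])))
# ['data_destruction_or_corruption', 'psychological_harm']
```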

Fourth, review supply chain contracts. The joint and several liability provision and the manufacturer-equivalent liability for modifiers mean that the liability allocation between providers and deployers needs explicit treatment in commercial agreements. An indemnity from a provider does not eliminate a deployer's liability to a third-party claimant, but it may provide recourse in the right direction once liability is established.

Fifth, consider insurance. The directive's extension to data loss and psychological harm, combined with the Article 9 rebuttable presumptions, will expand the volume of claims brought against AI providers and deployers and change their character. Standard cyber and professional indemnity products are being revised to exclude or limit AI activity exposure, as documented by Agent Insured. A product liability policy written specifically for AI systems is the correct instrument, and the market for that product is forming now.

For the framework and scoring model that European AI providers are using to assess their certification posture before the December deadline, see Agent Certified's full methodology.

Related reading

For the parallel operator regime under the EU AI Act, including the Article 26 duties and the minimum documentation file, see our compliance guide to EU AI Act operator obligations. For the three structural liability gaps across the EU framework, see the liability framework. For the chain of liability across provider, integrator and deployer, see AI liability chains and the provider-deployer split.

Frequently asked questions

Does Directive 2024/2853 apply to AI software and SaaS services?

Yes. The directive explicitly extends the definition of product to include software, digital content and digital services. Cloud-hosted AI systems, large language model APIs, and SaaS products embedding AI functionality are covered where supplied commercially. Purely free and open-source software distributed without commercialisation falls outside scope, but software supplied in exchange for personal data or in a commercial context is inside it.

What types of damage does the new directive cover that the old one did not?

The original directive covered personal injury, death, and damage to private property. Directive 2024/2853 expands compensable damage to include destruction or corruption of data or digital files and medically recognised psychological harm. Both categories are directly relevant to AI failure modes including corrupted records, exposed personal data, and harmful AI-generated content.

What are the rebuttable presumptions in Article 9 of the new directive?

Article 9 provides that where a claimant faces excessive difficulty in proving the defectiveness of a product or the causal link between defect and damage, a court may find in the claimant's favour on either or both points unless the defendant rebuts the presumption. For AI systems where internal logic is opaque, these presumptions substantially lower the burden of proof a claimant must meet. Defendants rebut them through technical documentation and monitoring records.

Who is liable under Directive 2024/2853 for damage caused by an AI system?

The primary target is the manufacturer of the defective product, which in the AI context means the provider that developed and placed the system on the market. Where the manufacturer is outside the EU, an authorised representative or importer assumes the liability position. Deployers that modify a system beyond its intended purpose may acquire manufacturer-equivalent exposure. Article 8 provides joint and several liability where multiple parties each contributed to the damage.

When does Directive 2024/2853 apply?

The directive entered into force on 8 December 2024. Member states must transpose it into national law by 9 December 2026. The old directive, 85/374/EEC, is repealed with effect from that date. Products placed on the market before the transposition deadline remain subject to the old rules.

References

  1. Directive 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC, OJ L, 18 November 2024.
  2. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7 August 1985.
  3. Directive 2024/2853, Article 4(1), definition of product, including software.
  4. Directive 2024/2853, Article 4(6), definition of damage, including data corruption and psychological harm.
  5. Directive 2024/2853, Article 6, defect standard and lifecycle considerations.
  6. Directive 2024/2853, Article 7, liable economic operators.
  7. Directive 2024/2853, Article 8, joint and several liability.
  8. Directive 2024/2853, Article 9, rebuttable presumptions of defectiveness and causation.
  9. Directive 2024/2853, Article 10, disclosure of evidence.
  10. Directive 2024/2853, Articles 13 and 14, limitation and long-stop periods.
  11. Regulation (EU) 2024/1689 (EU AI Act), Articles 9, 11, 12, 26, and Annex IV.
  12. Directive 2024/2853, Recital 12, confirming the inclusion of AI systems in the product definition.