The typical commercial AI deployment in 2026 involves at least three distinct parties. A foundation model provider builds and trains the base model. An integrator or developer builds an application layer on top of it. A deployer puts that application in front of end users. When something goes wrong, which party bears the legal exposure?

Key takeaways

  • The EU AI Act assigns regulatory obligations by role. The provider of a high-risk AI system carries the heaviest obligations. The deployer carries a defined subset. Article 25 specifies when a deployer crosses the line and becomes the provider for regulatory purposes.
  • The revised Product Liability Directive (Directive 2024/2853) provides joint and several liability where multiple parties each contributed to the same damage. A claimant can recover from any party in the chain.
  • Fine-tuning, prompt engineering at scale, and multi-agent orchestration can each shift the regulatory boundary between integrator and provider depending on the facts. There is no safe harbour for the deployer who assumes "the provider handles compliance."
  • The liability gap sits in the multi-party chain: each party has documentation and indemnity rights against others in theory, but the claimant-facing exposure lands first on the deployer, who has the direct user relationship.
  • Contractual allocation between parties in the AI supply chain matters, but it does not affect the liability owed to third-party claimants. Both instruments operate through mandatory provisions that commercial agreements cannot displace.

The structure of the chain

A modern AI agent deployment rarely involves a single party. The typical structure is layered. At the foundation layer sits the model provider: an organisation that trained a large language model or multimodal model, typically on broad internet data, and offers it via API. At the application layer sits an integrator: a developer who builds a specific product using the model API, adds retrieval-augmented generation, tool integrations, prompt engineering, and fine-tuning on proprietary data. At the surface layer sits the deployer: an organisation that puts the finished application in front of employees, customers, or members of the public.

Each layer introduces new behaviours. A foundation model may be safe within the parameters tested by its provider. The integrator's fine-tuning may introduce new failure modes the provider never tested for. The deployer's end-use context may involve users and inputs that neither the provider nor the integrator anticipated. A failure that produces harm can have its root cause at any of the three layers, or at the interfaces between them.

How the EU AI Act assigns roles

The AI Act defines two primary roles: provider and deployer. Article 3(3) defines a provider as any natural or legal person that develops an AI system or has one developed, and places it on the market or puts it into service under its own name or trademark. Article 3(4) defines a deployer as any natural or legal person that uses an AI system under its authority in the course of a professional activity.

The Act assigns different obligation sets to each. Providers of high-risk AI systems must complete conformity assessment under Article 43, maintain the technical documentation file under Article 11 and Annex IV, affix the CE marking under Article 48, register the system in the EU database under Articles 49 and 71, and conduct post-market monitoring under Article 72. Deployers must use the system in accordance with the provider's instructions for use under Article 26(1), assign human oversight to natural persons with the necessary competence, training and authority under Article 26(2), monitor operation and report serious incidents under Article 26(5), retain the automatically generated logs under Article 26(6), and inform workers before deploying in employment contexts under Article 26(7).

The practical observation is that the provider obligations are chiefly design and testing obligations, discharged before the system goes to market, with post-market monitoring as the continuing exception. The deployer obligations are operational obligations, incurred continuously throughout the system's deployment life. A deployer who does not know what the provider's instructions for use say, or who operates outside them, is in breach of Article 26(1) on day one of deployment.

When a deployer becomes the provider: Article 25

Article 25 is the provision that creates the most significant compliance risk for integrators and deployers operating in a grey zone. It provides that a deployer shall be considered a provider, with all the provider obligations that implies, in three circumstances.

The first is where the deployer places a high-risk AI system on the market or puts it into service under its own name or trademark. A company that buys access to a foundation model, builds an application on top of it, and markets that application under its own brand as a standalone product has placed a system on the market. It becomes the provider of that system for regulatory purposes, regardless of who built the underlying model.

The second is where the deployer makes a substantial modification to a high-risk AI system. Article 3(23) defines a substantial modification as a change not foreseen or planned in the provider's initial conformity assessment that affects the system's compliance with the requirements of the Act or alters the intended purpose for which the system was assessed. Fine-tuning that changes the risk profile of the system, integrating the system with another system in a way that creates new capabilities, or changing the deployment context to one outside the scope of the original conformity assessment are all candidates for substantial modification.

The third is where the deployer modifies the intended purpose of a system that was not originally classified as high-risk, with the result that the modified system is high-risk. A general-purpose model deployed for internal drafting is not high-risk. The same model deployed as a credit risk scoring tool, or as a CV screening system for hiring, is high-risk. The entity that made the deployment decision becomes the provider of a high-risk AI system and must complete conformity assessment before the deployment continues.
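Read together, the three circumstances amount to a decision rule that an integrator or deployer can screen a planned deployment against. The sketch below is a minimal illustration in Python, not a legal test: the flag names are ours, and each boolean stands in for a fact-specific assessment that the Act leaves to the circumstances.

    # Hypothetical screen against the three Article 25 triggers.
    # Each flag stands in for a fact-specific legal assessment.

    from dataclasses import dataclass

    @dataclass
    class Deployment:
        own_name_or_trademark: bool     # system marketed under the deployer's own brand
        substantial_modification: bool  # change outside the original conformity assessment
        was_high_risk: bool             # classification before the deployer's changes
        purpose_modified: bool          # intended purpose changed by the deployer
        new_purpose_high_risk: bool     # classification after the change of purpose

    def becomes_provider(d: Deployment) -> bool:
        """True if any Article 25 trigger reassigns provider obligations."""
        trigger_a = d.was_high_risk and d.own_name_or_trademark      # own name or trademark
        trigger_b = d.was_high_risk and d.substantial_modification   # substantial modification
        trigger_c = (not d.was_high_risk and d.purpose_modified
                     and d.new_purpose_high_risk)                    # new high-risk purpose
        return trigger_a or trigger_b or trigger_c

    # Example from the text: a general-purpose model redeployed for CV screening.
    cv_screening = Deployment(
        own_name_or_trademark=False,
        substantial_modification=False,
        was_high_risk=False,
        purpose_modified=True,
        new_purpose_high_risk=True,  # employment use case
    )
    assert becomes_provider(cv_screening)  # conformity assessment required before deployment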

The importer and distributor layer

Articles 23 and 24 of the AI Act address importers and distributors. An importer is any party established in the EU that places a system from a non-EU provider on the EU market. A distributor is any party in the supply chain other than the provider or importer who makes a system available on the market without substantially modifying it.

Both are treated as providers under Article 25 if they place a system on the market under their own name or trademark or make a substantial modification; and both breach their own obligations under Articles 23 and 24 if they know or have reason to know that a system does not comply with the Act and make it available anyway. In practice, this means that EU distributors of AI systems built by non-EU providers cannot assume that the provider handles compliance. They must verify that the conformity assessment has been completed, that the CE marking is in place, and that the technical documentation and declaration of conformity are available. If they cannot verify this, they are exposed.
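That verification burden can be captured as a pre-distribution checklist. The following is a hedged sketch with field names of our own invention; it simply records whether each piece of evidence the Act expects has actually been obtained.

    # Hypothetical pre-distribution verification record; field names are illustrative.

    REQUIRED_EVIDENCE = (
        "conformity_assessment_completed",    # Article 43 procedure finished
        "ce_marking_affixed",                 # Article 48
        "technical_documentation_available",  # Article 11 and Annex IV file
        "declaration_of_conformity_on_file",  # Article 47
    )

    def may_distribute(evidence: dict[str, bool]) -> bool:
        """A distributor that cannot evidence every item is exposed."""
        missing = [item for item in REQUIRED_EVIDENCE if not evidence.get(item, False)]
        for item in missing:
            print(f"blocking gap: {item}")
        return not missing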

Product liability and the chain

The revised Product Liability Directive, Directive 2024/2853, addresses the multi-party chain through two provisions. Article 8 identifies the economic operators liable: primarily the manufacturer, with the importer, the authorised representative, and the fulfilment service provider liable in turn, and the distributor as a fallback. Article 12 provides joint and several liability where multiple economic operators are each responsible for the same damage.

The joint and several liability provision is important. It means a claimant who suffers loss from an AI system can sue any party in the chain for the full amount of their loss. The parties then sort contribution between themselves in separate proceedings, based on their respective shares of responsibility. From the claimant's perspective, this is straightforward: sue the party with the deepest pockets or the most direct relationship, recover the full loss, and let that party pursue the rest of the chain.

In practice, the party with the most direct relationship to the claimant is almost always the deployer. The deployer sold the product or service, the deployer's name is on the terms of service, and the deployer is the entity the claimant knew they were dealing with. This means the deployer faces the initial claim, incurs the litigation costs, and must then seek contribution from the provider or integrator in a second proceeding, with all the discovery and attribution difficulties that implies.
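A worked example makes the deployer's position concrete. The figures and responsibility shares below are invented for illustration; nothing in either instrument prescribes them.

    # Illustrative arithmetic only: the loss and shares are invented.

    loss = 1_000_000  # EUR, the claimant's recoverable damage

    # Joint and several liability: the claimant recovers the full loss
    # from one party, typically the deployer with the direct relationship.
    paid_by_deployer = loss

    # Contribution is sorted afterwards by responsibility share (assumed here).
    shares = {"provider": 0.5, "integrator": 0.3, "deployer": 0.2}
    assert abs(sum(shares.values()) - 1.0) < 1e-9

    recoverable = paid_by_deployer * (shares["provider"] + shares["integrator"])
    print(f"paid to claimant now:      {paid_by_deployer:,.0f} EUR")
    print(f"recoverable upstream:      {recoverable:,.0f} EUR, if enforceable")
    print(f"deployer's residual share: {paid_by_deployer - recoverable:,.0f} EUR, plus costs")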

The three attribution problems

There are three attribution problems that make the liability chain difficult to resolve in practice. The first is the training data attribution problem: where a model hallucination or bias originates in training data, identifying which party selected or allowed that data, and whether its presence made the model defective at the time of training, is technically demanding and legally contested.

The second is the fine-tuning attribution problem: where an integrator has fine-tuned a foundation model on proprietary data, a failure mode that emerges in the fine-tuned model may have been absent in the base model, or may have been amplified by the fine-tuning. Determining whether the failure originated in the base model or the fine-tuning requires access to both, and to the test results for each.

The third is the context attribution problem: where the deployer's specific use case, user population, or integration architecture produced a failure that neither the base model nor the fine-tuned model would have exhibited in a different context. This is the most common failure mode for agentic systems, where the agent's tool access, memory, and multi-step reasoning introduce behaviours that were not foreseeable from the components tested in isolation.

The rebuttable presumptions in Article 10 of Directive 2024/2853 affect all three problems. If the claimant can show that an AI system produced an output causing harm, and that the harm is the kind a defective system could plausibly cause, the court may presume defectiveness or causation, and the burden shifts to the defendants to rebut it. Each party in the chain must be able to rebut the presumption against itself, not merely point to another party.

What this means for documentation

The liability chain analysis points toward a specific documentation requirement for each party. Foundation model providers need documentation of their pre-deployment testing, their performance benchmarks, the boundaries of their intended use, and their post-market monitoring regime. This is the technical file that an Article 26 deployer is entitled to receive with the instructions for use, and that a court can compel under the Article 9 disclosure-of-evidence provision of Directive 2024/2853.

Integrators need documentation of their fine-tuning process, the additional testing conducted on the integrated system, and the specific intended purpose and user population for which the integrated system was designed. Where the integration constitutes a substantial modification under Article 25 of the AI Act, the integrator needs a full conformity assessment file in addition.

Deployers need the minimum Article 26 file described in the operator obligations guide: a risk record, an oversight register, an instructions-for-use map, a logging schedule, and an incident protocol. The instructions-for-use map is particularly critical in the liability chain context: it is the document that shows where the deployer's usage sits relative to the provider's and integrator's stated operational boundaries, and it is the document that determines whether the deployer is operating within or outside those boundaries at the time of any incident.
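Treating the five documents as structured data keeps the file auditable and makes the boundary question answerable at incident time. The schema below is a hypothetical sketch; the field names are ours, not the Act's.

    # Hypothetical schema for the minimum deployer file; names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class InstructionsForUseMap:
        provider_stated_purpose: str   # intended purpose from the instructions for use
        deployer_actual_use: str       # how the system is operated in practice
        within_boundaries: bool        # the question a court asks after an incident
        deviations: list[str] = field(default_factory=list)

    @dataclass
    class DeployerFile:
        risk_record: list[str]         # identified risks and mitigations
        oversight_register: list[str]  # named individuals assigned oversight
        instructions_map: InstructionsForUseMap
        logging_schedule: str          # what is logged, where, and for how long
        incident_protocol: str         # escalation path for serious incidents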

For a structured approach to producing and maintaining these documents under a recognised certification framework, see the Agent Certified methodology, which provides a seven-dimension evaluation of AI agent deployments that maps to both the AI Act and the product liability framework.

The practical exposure point. The deployer carries the initial claim in almost every scenario, because the deployer has the direct user relationship. The deployer's ability to recover contribution from the provider or integrator depends on the contract, the documentation, and the facts of the failure. Neither the AI Act nor Directive 2024/2853 makes that recovery automatic.

What to put in AI supply chain contracts

Commercial agreements between parties in the AI supply chain cannot displace the mandatory liability provisions of Directive 2024/2853 with respect to third-party claimants. A deployer cannot contractually agree that it will not be liable to its users. But commercial agreements can set up the contribution framework that governs how the parties sort liability between themselves after a claim is resolved.

The key provisions for a deployer's contract with its model provider or integrator are:

  • a representation that the system has undergone any required conformity assessment;
  • an obligation on the provider to notify the deployer of any material change to the system that may affect compliance or safety;
  • an obligation to provide the technical documentation and instructions for use in a form that the deployer can rely on for its own Article 26 compliance;
  • an indemnity against losses arising from a defect in the underlying model that the deployer could not have discovered through reasonable diligence; and
  • a cooperation obligation for regulatory investigations and civil claims.
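Tracked as data, the same five provisions become a renewal-time gap check. The sketch below is hypothetical; the clause names are ours and no standard form is implied.

    # Hypothetical contract-coverage check; clause names are illustrative.

    provisions = {
        "conformity_assessment_representation": True,
        "change_notification_obligation":       True,
        "technical_documentation_obligation":   False,
        "latent_defect_indemnity":              False,
        "regulatory_cooperation_obligation":    True,
    }

    gaps = [name for name, agreed in provisions.items() if not agreed]
    print("renegotiate at renewal:", ", ".join(gaps) or "none")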

These provisions do not eliminate the deployer's exposure. They create recourse rights that the deployer can exercise once it has settled a claim or been found liable. Whether those rights are valuable depends on the provider's financial capacity and the strength of the underlying contract. In a market where the dominant AI model providers are non-EU entities with limited EU regulatory presence, the enforceability and value of those recourse rights are open questions.

Insurance and the chain

The liability chain analysis explains why AI-specific insurance is structurally necessary rather than optional for serious operators. Standard commercial liability products, including cyber and professional indemnity, cover the entity named in the policy, but the coverage terms are being revised to exclude or limit autonomous AI activity, as documented elsewhere in this publication. Where the deployer faces a claim arising from an AI failure that originated upstream in the chain, the deployer's commercial policy may not respond, and the deployer must then pursue the provider or integrator while holding the initial loss.

An AI-specific liability product designed for the deployer's position in the chain would cover: losses arising from AI outputs regardless of where in the chain the root cause lies; the cost of pursuing contribution from upstream parties; and the costs of regulatory response to an AI Act enforcement action. The market for this product is forming. For a current view of what is available and what is not, see Agent Insured's briefing on European enterprise AI liability coverage.

Related reading

For the Article 26 operator obligations that define the deployer's position in the regulatory chain, see the 2026 operator obligations guide. For the product liability framework that creates the civil exposure at the end of the chain, see the Product Liability Directive 2024 briefing. For the three structural gaps in EU AI agent underwriting, see the liability framework. For the full documentation architecture, see how to document AI agent risk management for compliance.

Frequently asked questions

Under the EU AI Act, when does a deployer become liable as a provider?

Article 25 of Regulation (EU) 2024/1689 provides that a deployer who places a high-risk AI system on the market under its own name or trademark, who makes a substantial modification to a high-risk system, or who modifies the intended purpose of a non-high-risk system such that it becomes high-risk is treated as the provider for the purposes of the Act and assumes all provider obligations. The threshold is not intent but effect.

Can a provider and deployer be jointly liable for the same AI-caused harm?

Yes. Under Article 12 of Directive 2024/2853, where multiple economic operators are each responsible for the same damage, they are jointly and severally liable. A claimant can recover the full amount from any one party, leaving the parties to sort contribution between themselves. The EU AI Act's regulatory penalties may also be imposed on the provider, the deployer, or both, depending on which party's obligations were breached.

What happens when no single party in the chain can prove the harm was caused by another party?

The rebuttable presumptions in Article 10 of Directive 2024/2853 address this. Where the claimant faces excessive difficulty in establishing causation, the court may presume causation if the defect appears capable of causing the type of harm suffered. Each party in the chain needs documentation sufficient to rebut the presumption against itself, not merely to point to another party as the more likely cause.

How does the EU AI Act treat importers and distributors in the liability chain?

Articles 23 and 24 of Regulation (EU) 2024/1689 set obligations on importers and distributors of high-risk AI systems, and Article 25 treats either as the provider if it places a system on the market under its own name or trademark or makes a substantial modification. EU distributors of AI systems built by non-EU providers cannot assume that the provider handles compliance and must verify conformity assessment completion, CE marking, and the availability of the technical documentation and declaration of conformity.

What contractual protections should deployers seek from AI providers?

Deployers should seek representations that the system has undergone required conformity assessment, obligations on the provider to notify of changes affecting compliance, obligations to provide complete technical documentation and instructions for use, an indemnity against losses from undiscoverable latent defects, and a cooperation obligation for investigations and claims. These sit alongside, and do not replace, the deployer's own Article 26 obligations.

References

  1. Regulation (EU) 2024/1689 (EU AI Act), Article 3(3), definition of provider.
  2. Regulation (EU) 2024/1689, Article 3(4), definition of deployer.
  3. Regulation (EU) 2024/1689, Article 25, responsibilities along the AI value chain.
  4. Regulation (EU) 2024/1689, Article 23, obligations of importers of high-risk AI systems.
  5. Regulation (EU) 2024/1689, Article 24, obligations of distributors of high-risk AI systems.
  6. Regulation (EU) 2024/1689, Article 26, obligations of deployers of high-risk AI systems.
  7. Regulation (EU) 2024/1689, Article 3(23), definition of substantial modification.
  8. Directive (EU) 2024/2853, Article 8, economic operators liable for defective products.
  9. Directive (EU) 2024/2853, Article 12, joint and several liability of multiple economic operators.
  10. Directive (EU) 2024/2853, Article 10, burden of proof and rebuttable presumptions.
  11. Directive (EU) 2024/2853, Article 9, disclosure of evidence.
  12. AIUC. AIUC-1 Standard, first edition. Published July 2025. Referenced by ElevenLabs in connection with the first AIUC-1-backed policy, February 2026.