Five developments this week
- The Digital Omnibus proposes moving Annex III high-risk obligations to 2 December 2027. The amendment is in trilogue and has not been adopted. The 2 August 2026 date remains binding until formal adoption.
- Article 5 prohibitions, Article 50 transparency, and GPAI obligations under Articles 53 and 55 are outside the Omnibus scope and unchanged.
- HSB, Armilla via Chaucer, and Testudo all moved on AI liability coverage in the first quarter of 2026. Underwriting is not tracking the regulatory calendar.
- Italy enacted the first national AI implementing legislation in October 2025. Hungary, Lithuania, Finland, and Cyprus have designated supervisory authorities. Germany's KI-MIG draft is pending.
- Two US cases, Mobley v. Workday and Benavides v. Tesla, now sit as live precedent that insurers and counsel are referencing across both jurisdictions.
Story 1: The Digital Omnibus delay, precisely stated
The European Commission's Digital Omnibus legislative package entered trilogue on 28 April 2026. The central AI Act amendment in the package proposes extending the application date for the Annex III high-risk operator obligations under Chapter III, Section 3 from 2 August 2026 to 2 December 2027. This represents a sixteen-month extension of the window for deployers who have not yet completed their operator file.
Council and Parliament have, at the time of this issue, converged on the 2 December 2027 date. That convergence is a strong signal of eventual adoption. It is not adoption. Until the amending Regulation is formally published in the Official Journal of the European Union, the original application dates in Regulation (EU) 2024/1689 remain in full legal force. The Commission has indicated formal adoption is unlikely before July 2026.
The Omnibus package does not amend the Regulation comprehensively. It targets one specific transition date. Everything else in the Regulation, including its structure, penalty framework, and enforcement architecture, remains as enacted. A deployer who treats the trilogue convergence as equivalent to a formal extension and halts compliance work is taking a legal risk that is uncompensated by any protection the Omnibus will eventually offer.
The practical implication for compliance teams is direct: the work of building the operator file should continue at the pace required by the original August 2026 date. If the Omnibus adopts the extension before August, the operator file will already be complete, which is the correct posture regardless of the deadline. If adoption is delayed and the original date holds, the deployer will not be exposed.
Track the Omnibus trilogue weekly. Do not pause compliance work. Build the operator file to the August 2026 standard. A formally adopted extension is a bonus, not a basis for delay.
Source: European Commission Digital Omnibus package, trilogue 28 April 2026. Regulation (EU) 2024/1689, Article 113. Agent Liability EU Master Brief: 100-day operator checklist.
Story 2: What stays mandatory on 2 August 2026
Three categories of obligation are outside the Omnibus scope. Understanding what is not being delayed is as important as understanding what is.
Article 5 of Regulation (EU) 2024/1689 prohibits certain AI practices absolutely. These include biometric categorisation systems that infer protected characteristics, social scoring, real-time remote biometric identification systems used in publicly accessible spaces by law enforcement without prior authorisation, and systems that exploit vulnerabilities to distort behaviour. These prohibitions became applicable on 2 February 2025. No extension has been proposed and none is expected. Deployers operating systems that touch these categories are already in the enforcement window.
Article 50 sets transparency obligations that apply when an AI system interacts with natural persons or generates synthetic content. These obligations attach to providers and deployers from 2 August 2026. The Omnibus does not propose any amendment to Article 50. Systems that interact with users through chat, voice, or content generation and do not carry Article 50 disclosures will be in breach from 2 August 2026 regardless of whether the Annex III extension is adopted.
Articles 53 and 55 set obligations for providers of general-purpose AI models. These include technical documentation, cooperation with the AI Office, and, for models with systemic risk, adversarial testing and serious incident reporting. These provisions became applicable on 2 August 2025 and have been in force for nine months at the time of this issue. The Omnibus does not propose any amendment to the GPAI framework.
The Product Liability Directive (Directive (EU) 2024/2853) is a separate legislative instrument. It requires Member States to transpose by 9 December 2026. It applies to AI software as a product, extends liability to economic operators across the supply chain, and introduces a disclosure mechanism that allows claimants to request technical documentation held by manufacturers. The Omnibus does not affect this transposition deadline.
Build the operator file regardless of the Annex III debate. Audit Article 50 disclosures in every AI-facing product. Confirm GPAI compliance if any foundation model is deployed or embedded. Verify that legal counsel has mapped the PLD transposition timeline in each Member State of operation.
Source: Regulation (EU) 2024/1689, Articles 5, 50, 53, 55, 113. Directive (EU) 2024/2853, Article 23. Official Journal of the European Union. EUR-Lex: Regulation 2024/1689.
Story 3: The insurance market is not waiting for the Omnibus
Three named market developments in the first quarter of 2026 indicate that the AI liability insurance market is accelerating independently of the regulatory timeline. Each is relevant to both operators seeking coverage and brokers advising on placement strategy.
On 18 March 2026, HSB (Hartford Steam Boiler, a Munich Re subsidiary) launched an AI Liability product aimed at SMBs. The product covers third-party claims arising from AI system errors, regulatory investigation costs, and incident response expenses. HSB's underwriting questionnaire explicitly addresses whether the insured has completed an AI risk assessment, which maps directly to the operator file obligations under Article 26. This is the first instance we have tracked of a named carrier making EU AI Act compliance documentation a direct input into underwriting.
On 10 February 2026, Armilla, operating through Chaucer's Lloyd's syndicates 1084 and 1176, launched its Vanguard AI programme with aggregate capacity reported above USD 25 million. The programme targets AI developers and deployers and includes coverage for technology errors and omissions, privacy liability, and regulatory defence costs. Armilla has previously published a framework for AI model risk evaluation that references EU AI Act classification categories as a structuring tool.
In March 2026, Testudo expanded its standalone AI coverage limit to USD 9.25 million. Testudo's product has a narrower focus on AI model performance risk and indemnification for deployers whose AI systems produce incorrect outputs that generate third-party claims.
The pattern across all three is consistent. Underwriters are moving on product development and capacity deployment at a pace that does not track the regulatory calendar. The practical consequence for brokers is that clients seeking AI-specific coverage in the second half of 2026 will encounter underwriters who are already familiar with the EU AI Act operator file concept and who treat the absence of that documentation as an adverse risk factor.
Approach renewal conversations with a documented compliance posture, not a summary of compliance intentions. Underwriters are asking for the operator file by name. Brokers advising technology-sector clients should check whether current placements include AI-specific endorsements or rely on general liability wordings that may exclude or limit AI claims through ISO endorsements CG 40 47 or CG 40 48.
Source: HSB press release, 18 March 2026. Armilla Vanguard AI announcement, 10 February 2026. Testudo coverage expansion announcement, March 2026. Agent Liability EU: AI policy exclusions analysis.
Story 4: Member State implementation is advancing on its own schedule
The Omnibus debate takes place against a background of Member State-level implementation that is moving independently. Operators focused only on the central Regulation date are missing a layer of legal exposure that is already crystallising.
Italy enacted Law No. 132/2025, which entered into force on 10 October 2025. Law No. 132/2025 designates the Agenzia per l'Italia Digitale (AGID) and the Agenzia per la Cybersicurezza Nazionale (ACN) as jointly competent national authorities under Article 70 of the AI Act, establishes the administrative procedures that will govern Article 99 enforcement proceedings in Italy, and sets additional notification requirements for operators of high-risk systems operating in regulated sectors. Italy's implementing law also includes provisions that go beyond the Regulation's minimum requirements in the area of AI transparency for public administration. Italy is the first EU Member State to enact comprehensive national implementing legislation.
Hungary, Lithuania, Finland, and Cyprus have each formally designated national supervisory authorities under Article 70, as required before enforcement can begin. Germany has published a draft of its KI-Marktüberwachungsgesetz (KI-MIG), which designates the Bundesnetzagentur as the primary market surveillance authority and sets national procedural rules for the conduct of investigations. The KI-MIG draft had not been enacted at the time of this issue.
The relevance for cross-border deployers is twofold. First, national implementing acts can add procedural obligations, notification requirements, and filing deadlines that are not visible from the text of the Regulation itself. A deployer compliant with the Regulation's minimum requirements may still be non-compliant with a Member State's implementing provisions. Second, enforcement will be conducted by national supervisors, and the procedural framework in each jurisdiction will shape how investigations are opened, how documentation is requested, and how penalties are calculated.
Map the supervisory authority designation and implementing legislation status in every Member State where high-risk AI systems are actively deployed. Do not rely on the Regulation text alone. Instruct local counsel in Italy, Germany, and any other jurisdiction where national implementing acts are in force or near enactment.
Source: Italian Law No. 132/2025, Gazzetta Ufficiale No. 237, 10 October 2025. AI Office national authority register. Agent Liability EU: Member State implementation tracker.
Story 5: US courts are setting precedent that crosses the Atlantic
Two US judicial decisions from 2025 have entered the reference files of insurers and counsel operating in the AI liability space on both sides of the Atlantic. Neither case turns on the EU AI Act, and both predate any enforcement action under the Regulation. Both are nonetheless shaping how underwriters model AI agent risk and how courts may approach causation questions in comparable EU litigation.
In May 2025, the United States District Court for the Northern District of California preliminarily certified a collective action in Mobley v. Workday. The plaintiffs alleged that Workday's AI-assisted applicant screening product produced discriminatory hiring recommendations affecting Black, older, and disabled candidates at scale across multiple client organisations. The court's certification decision addressed the question of whether harms generated by a shared AI system affecting many individuals across multiple employer relationships could be pursued collectively. The court found that they could, on the basis that the AI system's operation, not the individual employment decisions, was the common thread. For EU operators, the Mobley certification establishes that AI-mediated discrimination claims can aggregate at scale and that the provider of the AI system, not only the employers deploying it, is a potential respondent.
In August 2025, a federal jury in the United States District Court for the Southern District of Florida returned a verdict of USD 329 million in Benavides v. Tesla. The case arose from a fatality involving a Tesla vehicle operating in a supervised autonomy mode. The verdict turned on the question of whether the supervised autonomy system was a product, whether Tesla had adequately disclosed its operational limitations, and whether the company's representations about the system's capabilities created a duty of care that was breached. The USD 329 million figure is the largest jury verdict in a case directly addressing autonomous system failure. For AI liability underwriters, the Benavides verdict is a calibration event: it establishes that jury exposure for autonomous system failures is an order of magnitude above what general liability policy limits have historically been set to absorb.
The cross-jurisdictional relevance of both cases lies in their potential to influence how EU courts approach analogous claims under the Product Liability Directive's expanded framework once it is transposed, and in their direct effect on US reinsurance pricing that flows back through Lloyd's and other European carriers.
Insurance posture should reflect both EU regulatory risk and US litigation exposure. Transatlantic organisations face the full risk simultaneously. Verify that policy limits across all AI-touching product lines are calibrated against the Benavides benchmark, not historic general liability norms.
Source: Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal.), collective action certification order, May 2025. Benavides v. Tesla, Inc. (S.D. Fla.), verdict August 2025. Agent Liability EU: liability framework reference.
Closing note
The Digital Omnibus delay, if and when it is formally adopted, will give a substantial proportion of the European operator market more time to prepare. The compliance programme that produces a complete operator file by July 2026 will be in a stronger position with underwriters, with national supervisors, and with clients than one that treats any proposed extension as a reason to pause. The five developments in this issue share a common thread: the legal, insurance, and judicial environment around AI liability is advancing on its own momentum, and the regulatory calendar is one input among several.
Issue 002 of The Authority Stack Briefing publishes on Tuesday 5 May 2026. If you received this issue from a colleague, subscribe at agentliability.eu/briefing/.
Editorial firewall. The Authority Stack Briefing is editorially independent. Future Proof Intelligence does not receive payment from carriers, vendors, or regulatory bodies for editorial coverage. Carrier and product references in this issue are included on the basis of public announcements and verified market reporting only. Nothing in this publication constitutes legal, financial, or regulatory advice. Readers should consult qualified professionals before making compliance or coverage decisions.
The Authority Stack Briefing is published every Tuesday by Future Proof Intelligence. Archive at agentliability.eu/briefing/. Editorial standards at agentliability.eu/editorial-standards.html. Contact: editors@agentliability.eu.