Today is 24 April 2026. There are exactly 100 days until 2 August 2026, when the operator provisions of the EU AI Act enter application. This is the checklist, week by week, from now to the day the deadline lands.
Key takeaways
- The 100-day countdown begins today, 24 April 2026. The 2 August 2026 deadline is fixed. There is no extension mechanism in the Regulation.
- The operator provisions centre on three articles: Article 26 (deployer duties), Article 27 (fundamental rights impact assessment for qualifying deployers), and Article 50 (transparency obligations toward natural persons).
- Penalty exposure under Article 99 tier two reaches EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher, for breaches of the Article 26 obligations.
- Five duties in Article 26 cannot be transferred by contract to the provider. They attach to the deployer and are owed to the national supervisor regardless of what the provider's terms of service say.
- The minimum operator file is five documents: risk record, oversight register, instructions-for-use map, logging schedule, and incident protocol.
- Public deployers face a separate registration obligation under Article 71 in the EU database for high-risk AI systems maintained by the European Commission.
- The specialist insurance market is bifurcating: dedicated carriers are building coverage designed for AI operational risk, while broad general liability and professional indemnity underwriters are tightening AI exclusions under ISO endorsement forms CG 40 47 and CG 40 48.
Section 1. Where you stand on day 100
As of April 2026, the market picture from law firm tracker reports and advisory firm readiness surveys is consistent on one finding: a substantial proportion of deployers in scope of the August 2026 obligations have not begun producing the operator file. Estimates from European law firm surveys published in Q1 2026 place self-reported readiness among high-risk deployers at below 30 per cent, with smaller organisations and public sector bodies the furthest behind.
The reasons cited in those surveys are predictable. Many deployers are still uncertain whether their systems are high-risk under Article 6 and Annex III. Others have waited for the AI Office to publish its guidance materials before taking action, not recognising that the obligation to comply exists independently of any guidance the AI Office eventually produces. A third group has delegated compliance preparation to its provider relationship, incorrectly assuming that the provider's conformity assessment satisfies the deployer's own obligations.
None of these positions provides legal cover from 2 August. The provisions apply when they apply. A deployer that has not produced a risk record, named oversight persons, or documented its logging practices on that date will be in breach, regardless of the reason. One hundred days is a short window for building a compliance programme from scratch. It is enough for a focused organisation to produce the minimum file and rehearse the incident protocol. It is not enough for organisations that continue to wait.
Section 2. The five duties you cannot delegate
Article 26 sets out deployer obligations across seven sub-paragraphs. Two of them apply only to particular categories of deployer: Article 26(7) on worker information applies only in employment contexts, and Article 26(8) on registration applies only to public authorities. The five obligations that apply to every deployer of a high-risk AI system in a professional context are the ones that generate most of the compliance work and carry the most enforcement risk.
The table below describes each of the five operationally critical duties, what the obligation requires in practice, what counts as evidence that it has been satisfied, and what disqualifies a deployer from claiming compliance.
| Duty | Article | What it requires | Qualifying evidence | Common disqualifiers |
|---|---|---|---|---|
| Use within instructions for use | Art. 26(1) | Deploy the system only within the parameters documented by the provider, including intended purpose, operating environment, and performance limits. | An instructions-for-use map showing the provider's stated limits alongside actual usage, with deviations flagged and resolved or justified. | No provider documentation obtained; system used for purposes not listed by the provider; jurisdiction of use differs from provider's stated scope without documented assessment. |
| Human oversight by named persons | Art. 26(2), Art. 14 | Assign named, competent, trained, and authorised individuals to exercise oversight over the system. The oversight function must be staffed, not just formally assigned. | Oversight register listing named persons, their documented training, their authority level, and the escalation path to a senior decision maker. | Oversight role assigned to a team or department without named individuals; no training record; no escalation path documented; oversight person lacks authority to halt or suspend the system. |
| Input data relevance | Art. 26(4) | Where the deployer controls input data, ensure it is relevant and sufficiently representative for the system's intended purpose. | Documented assessment of input data sources, their representativeness, and the process for verifying relevance at each operational cycle. | No assessment of input data quality performed; deployer connects its own customer data to the system without reviewing it against the provider's training data scope; no process for updating the assessment when input sources change. |
| Monitoring and incident reporting | Art. 26(5) | Monitor the system during operation. Report serious incidents within the meaning of Article 3(49) to the provider and, where applicable, to the market surveillance authority. Suspend use where Article 79 risks are identified. | Written incident protocol with named contacts for the provider and the relevant national market surveillance authority; documented monitoring cadence; evidence that the protocol has been tested. | No monitoring process; incident defined only as customer complaint rather than by Art. 3(49) criteria; no written escalation path to the market surveillance authority; protocol untested before activation. |
| Log retention | Art. 26(6) | Retain automatically generated logs for at least six months, or longer where sectoral law requires it, in a form that can be produced on request. | Logging schedule specifying: what fields are captured, where they are stored, the retention period, and the retrieval procedure for supervisor requests. | Logs not retained or automatically purged before six months; log format not readable by an external reviewer; no documented retrieval procedure; logs lost during system migration. |
Section 3. The 100-day, week-by-week countdown
What follows is a structured countdown from 24 April 2026 to 2 August 2026. Each pair of weeks covers a defined phase of the compliance programme. The sequencing is deliberate: the risk record comes first because every other document refers to it. The oversight register comes second because the persons it names will carry the incident protocol. The operational documents follow. The final two weeks are for review, sign-off, and file freeze.
This is the minimum viable sequence for a deployer starting from scratch. Deployers that have already completed earlier phases should map their current state against the relevant week and accelerate from that point.
Weeks 1 and 2: 24 April to 7 May
Phase: Scope and gap analysis
- Inventory every AI system currently in active use across the organisation. Include systems operated by third-party service providers where the deployer is the contracting entity using the output in a professional context.
- For each system, assess whether it is high-risk under Article 6(2) and Annex III. The high-risk categories include systems used in recruitment and worker management, access to education, credit and insurance risk scoring, administration of essential services, law enforcement, migration and border control, and administration of justice.
- For each system classified as high-risk, obtain the provider's technical documentation, instructions for use, and conformity assessment summary. If the provider has not supplied these, request them formally in writing and document the request and the response.
- Produce a gap register: for each in-scope system, record which of the five operator file documents currently exists and which is absent. This register drives the rest of the programme; a minimal sketch of a register entry appears after this list.
- Identify whether any in-scope system is operated in an employment context (triggering Art. 26(7) worker information duties) or by a public authority (triggering Art. 26(8) registration under Art. 71).
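For deployers that keep the gap register in a spreadsheet or internal tool, the sketch below shows one way an entry could be structured. It is a minimal illustration in Python under this article's own framing: the field names, the five-document list, and the example system are assumptions for illustration, not terms defined in the Regulation.

```python
# A minimal sketch of one gap register entry, assuming the five-document
# operator file described in this article. Field names are illustrative.
from dataclasses import dataclass, field

OPERATOR_FILE_DOCS = (
    "risk_record",
    "oversight_register",
    "instructions_for_use_map",
    "logging_schedule",
    "incident_protocol",
)

@dataclass
class GapRegisterEntry:
    system_name: str
    annex_iii_category: str           # e.g. "4(a) recruitment"
    employment_context: bool = False  # triggers Art. 26(7) worker information
    public_authority: bool = False    # triggers Art. 26(8) / Art. 71 registration
    documents_present: dict = field(default_factory=dict)

    def missing_documents(self) -> list:
        """Operator file documents not yet produced for this system."""
        return [d for d in OPERATOR_FILE_DOCS if not self.documents_present.get(d)]

# Example: a recruitment screening system with only the risk record drafted.
entry = GapRegisterEntry(
    system_name="CV screening tool",
    annex_iii_category="4(a) recruitment",
    employment_context=True,
    documents_present={"risk_record": True},
)
print(entry.missing_documents())
# ['oversight_register', 'instructions_for_use_map', 'logging_schedule', 'incident_protocol']
```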
Weeks 3 and 4: 8 May to 21 May
Phase: Risk record and oversight register draft
- Draft the risk record for each in-scope system. The risk record is the deployer's own description of the system: its intended purpose, its Annex III classification, the risks identified in the provider's documentation, and any additional risks arising from the deployer's specific usage context.
- Confirm the system's classification under Article 6(2). If the system is a general-purpose AI model being used in a high-risk context, the deployer bears the classification responsibility under Article 25.
- Identify candidate oversight persons for each system. Assess their current competence against the requirements of Article 26(2): do they have the technical knowledge to understand the system's outputs, the authority to intervene or halt use, and access to the support needed to exercise that authority?
- Define the training programme needed to close any competence gap. The training must be documented and completed before 2 August.
- Open the oversight register. For each in-scope system, record the name, role, training status, authority level, and escalation path for each oversight person. The escalation path must reach a named senior decision maker, not a generic title.
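The oversight register benefits from the same structured treatment. The sketch below illustrates one possible shape for a register entry, with a completeness rule reflecting this article's reading of Article 26(2). The field names and example values are assumptions; the rule is an internal check, not a statutory test.

```python
# A minimal sketch of one oversight register entry. The completeness rule
# mirrors the requirements discussed above: a named person, documented
# training, authority to halt, and a named escalation contact.
from dataclasses import dataclass

@dataclass
class OversightEntry:
    system_name: str
    person_name: str         # a named individual, not a team or department
    role: str
    training_completed: bool
    training_record: str     # dates, content, and provider of the training
    can_halt_system: bool    # authority to intervene, correct, or suspend
    escalation_contact: str  # a named senior decision maker, not a generic title

    def is_complete(self) -> bool:
        return all([
            self.person_name.strip(),
            self.training_completed,
            self.training_record.strip(),
            self.can_halt_system,
            self.escalation_contact.strip(),
        ])

entry = OversightEntry(
    system_name="credit scoring model",
    person_name="A. Example",
    role="Credit operations lead",
    training_completed=True,
    training_record="2026-05-15, provider-led oversight training",
    can_halt_system=True,
    escalation_contact="Chief Risk Officer (named in the register)",
)
print(entry.is_complete())  # True
```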
Weeks 5 and 6: 22 May to 4 June
Phase: Instructions-for-use mapping and logging schedule
- For each in-scope system, produce a side-by-side mapping of the provider's stated operational parameters against the deployer's current usage. The parameters include: intended purpose, geographic scope, user population, input data types, operational environment, and performance thresholds.
- Flag every point of divergence. For each divergence, make a decision: bring usage back within the stated parameters, or document the risk assessment supporting continued out-of-scope usage and accept the consequences under Article 26(1).
- Escalate divergences that cannot be resolved at an operational level to senior management. These are compliance decisions, not technical ones.
- Draft the logging schedule. Coordinate with the data engineering team and the provider to identify: which fields are automatically generated by the system, which of those are captured and retained, where they are stored, in what format, and for how long.
- Confirm the retention period against the six-month floor in Article 26(6) and any sectoral law that sets a longer period. In financial services and healthcare, sectoral retention rules will typically require longer periods than that floor.
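The retention confirmation is a simple comparison, but it is easy to get wrong once sectoral rules enter the picture. The sketch below illustrates the logic: the binding period is the longer of the Article 26(6) floor and any sectoral minimum. The sectoral figures shown are placeholders, not statements of what any sectoral law actually requires.

```python
# A minimal sketch of the retention-period check. The six-month floor comes
# from Article 26(6); the sectoral minimums below are placeholders and must
# be confirmed against the law that actually applies to the deployer.
AI_ACT_FLOOR_MONTHS = 6

SECTORAL_MINIMUM_MONTHS = {
    "financial_services": 60,  # illustrative only
    "healthcare": 120,         # illustrative only
}

def required_retention_months(sector: str = "") -> int:
    """The binding period is the longer of the AI Act floor and any sectoral minimum."""
    return max(AI_ACT_FLOOR_MONTHS, SECTORAL_MINIMUM_MONTHS.get(sector, 0))

def retention_gap(configured_months: int, sector: str = "") -> int:
    """Months by which the configured retention falls short (0 means no gap)."""
    return max(0, required_retention_months(sector) - configured_months)

print(retention_gap(configured_months=3))                                # 3
print(retention_gap(configured_months=12, sector="financial_services"))  # 48
```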
Weeks 7 and 8: 5 June to 18 June
Phase: Incident protocol and FRIA where applicable
- Draft the incident protocol. The protocol must address: how a serious incident within the meaning of Article 3(49) is identified during normal operation; who is notified internally and in what sequence; how the provider is contacted and what information is shared; which national market surveillance authority receives the external report and how; and at what threshold the deployer suspends use of the system pending investigation. A sketch of the protocol as a structured record appears after this list.
- Confirm the identity of the relevant market surveillance authority in each Member State where the system operates. Several Member States have designated their data protection authority for certain high-risk categories. Contact details should be included in the incident protocol by name, not by generic description.
- For deployers in scope of Article 27, complete or formally commission the Fundamental Rights Impact Assessment. The FRIA must address: the process for using the system, the persons likely to be affected, the specific risks of harm to those persons across the full spectrum of rights in the EU Charter, the oversight and mitigation measures in place, and the action plan if risks materialise.
- If a DPIA has already been completed under GDPR Article 35, map it against the FRIA requirements under Article 27(4). The FRIA complements but does not replace the DPIA, and the reverse is also true.
- Identify the supervisor responsible for receiving the FRIA notification under Article 27(3) and confirm the notification procedure in that jurisdiction.
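As flagged in the first item of this phase, the incident protocol is easier to test and maintain when it is held as a structured record rather than prose alone. The sketch below illustrates one possible shape; the field names, placeholder contacts, and readiness rule are this article's assumptions, and the serious-incident categories are summarised from Article 3(49) rather than quoted from it.

```python
# A minimal sketch of an incident protocol record. Contacts and thresholds
# are placeholders to be replaced with the deployer's own details.
# Article 3(49) serious incidents, as summarised in this article: death,
# serious harm to health, significant disruption of critical infrastructure,
# serious damage to property or the environment.
from dataclasses import dataclass, field

@dataclass
class IncidentProtocol:
    system_name: str
    internal_escalation_chain: list = field(default_factory=list)  # ordered, named roles
    provider_contact: str = ""
    market_surveillance_authority: str = ""  # named authority per operating jurisdiction
    authority_contact: str = ""
    suspension_threshold: str = ""           # point at which use is suspended pending investigation
    last_tested: str = ""                    # date of the most recent tabletop exercise

    def ready_for_activation(self) -> bool:
        """Untested protocols and unnamed authority contacts are not ready."""
        return bool(self.authority_contact.strip()) and bool(self.last_tested.strip())

protocol = IncidentProtocol(
    system_name="benefit eligibility triage",
    internal_escalation_chain=["oversight person", "head of operations", "general counsel"],
    provider_contact="provider incident desk (named contact)",
    market_surveillance_authority="relevant national authority (confirm per jurisdiction)",
    authority_contact="authority incident mailbox (confirm before August)",
    suspension_threshold="any event meeting an Article 3(49) criterion",
)
print(protocol.ready_for_activation())  # False until a tabletop exercise date is recorded
```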
Weeks 9 and 10: 19 June to 2 July
Phase: Live readiness rehearsal
- Run a tabletop incident exercise. Present an example event that would constitute a serious incident under Article 3(49), and walk the oversight team through the incident protocol from identification to external reporting. Time the exercise. Identify every gap in the chain.
- Test log retrieval. A member of the oversight team should request a log extract covering a defined period, and the data engineering team should produce it. The test is complete when the extract can be delivered in a format readable by an external reviewer within a defined time window. A sketch of a scripted version of this test appears after this list.
- Verify that each named oversight person can articulate in their own words: what the system does, what decisions it informs, when they would intervene, and how they would halt or suspend the system. If they cannot, the oversight register entry is not complete and additional training is needed before August.
- Update all five operator file documents based on the findings of the rehearsal exercise. Version-control the updates and record the date of revision.
- If the FRIA is in scope, confirm that it has been completed and is ready for supervisor notification. The notification procedure will vary by jurisdiction and by the supervisor's stated preferred channel.
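The timed retrieval test referenced earlier in this list can be scripted so that it is repeatable rather than a one-off favour from the data engineering team. The sketch below illustrates the idea: fetch the logs, produce a reviewer-readable extract, and check both readability and elapsed time against an internal target. The CSV format, the four-hour window, and the sample record are assumptions for illustration.

```python
# A minimal sketch of a repeatable log-retrieval test. The fetch function is a
# stand-in for whatever store the logs actually live in; the time window is an
# internal target chosen by the deployer, not a statutory figure.
import csv
import io
import time

MAX_RETRIEVAL_SECONDS = 4 * 60 * 60  # illustrative internal target

def export_logs_as_csv(records: list) -> str:
    """Produce a reviewer-readable extract from the raw log records."""
    if not records:
        return ""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buffer.getvalue()

def run_retrieval_test(fetch_records) -> bool:
    """Time the end-to-end retrieval and confirm the extract is non-empty and parseable."""
    start = time.monotonic()
    extract = export_logs_as_csv(fetch_records())
    elapsed = time.monotonic() - start
    readable = bool(extract) and len(extract.splitlines()) > 1  # header plus at least one row
    return readable and elapsed <= MAX_RETRIEVAL_SECONDS

def sample_fetch():
    # Stand-in for the real log store query covering the defined period.
    return [{"timestamp": "2026-06-20T10:00:00Z", "decision_id": "A-1", "outcome": "referred"}]

print(run_retrieval_test(sample_fetch))  # True
```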
Weeks 11 and 12: 3 July to 16 July
Phase: Supplier and insurance review
- Review every contract with AI system providers to confirm that the terms do not purport to transfer the deployer's Article 26 obligations to the provider. Flag any language suggesting that the provider's conformity assessment satisfies the deployer's regulatory obligations. Such language has no regulatory effect and should not be relied upon.
- Confirm that the provider's instructions for use are current and have not been superseded by a system update. A system update that changes the operational parameters triggers a re-run of the instructions-for-use mapping from Weeks 5 and 6.
- Review current insurance coverage for alignment with AI operational risk. Standard general liability and professional indemnity policies are being tightened by carriers. The ISO CG 40 47 and CG 40 48 endorsements narrow coverage for AI-originating claims. Identify the specific exclusions in current policies before August.
- Engage a specialist broker or direct carrier with documented AI liability underwriting capacity if current coverage has material gaps. The specialist market includes HSB (a Munich Re subsidiary), Armilla, and AIUC among others. See Section 6 of this article for a fuller discussion of the insurance market.
- Confirm that the insurer has been informed of the AI systems the organisation operates, particularly those classified as high-risk. Failure to disclose material AI operational risk to a liability insurer may void coverage at point of claim.
Weeks 13 and 14: 17 July to 30 July
Phase: Final documentation, board sign-off, and file freeze
- Commission a final legal review of all five operator file documents. The review should assess: whether the risk record accurately reflects the Annex III classification; whether the oversight register names persons with the authority and training Article 26(2) requires; whether the instructions-for-use map is complete and no unresolved divergences remain; whether the logging schedule meets the six-month floor and any applicable sectoral retention requirement; and whether the incident protocol names the correct market surveillance authority and a realistic suspension threshold.
- Obtain board or senior management sign-off on each document. The sign-off is not a formality. It is the organisation's formal acknowledgement that it has satisfied the Article 26 duties. It also creates the organisational accountability that a supervisor will look for in an early inquiry.
- Freeze the file version. The file that exists on 2 August should be the file that was reviewed and signed off. A compliance file that is still being revised on the day the provisions apply raises questions about whether the obligations were in fact satisfied before activation.
- If the deployer is a public authority, confirm that the system has been registered in the EU database under Article 71 and that the registration record is accurate.
- If the system is deployed in an employment context, confirm that the worker information notice required by Article 26(7) has been issued to worker representatives and affected workers, and document the date and method of notification.
- On 30 July, conduct a final readiness confirmation. Each document exists. Each oversight person is briefed. Each log is being retained. The incident protocol has been tested. The file is ready for production if a supervisor asks for it on 2 August.
Section 4. The minimum operator file
The operator file is not a single document and it is not a form published by the AI Office. It is a set of five documents that a deployer must be able to produce to a supervisor, an auditor, or an insurer on request. The AI Office has been mandated to publish a questionnaire template to assist with the FRIA under Article 27(5), but that template assists with one of the five documents in specific circumstances. It does not define or replace the others.
Each of the five documents has a defined scope and a defined relationship to the others. They are designed to be read as a set.
- The risk record. A concise description of the AI system, its intended purpose, its Annex III classification, the risks identified in the provider's technical documentation, and any additional risks arising from the deployer's specific operating context. The risk record is the deployer's own reading of the system: it is not the provider's documentation reproduced, and it is not the conformity assessment summary. It reflects the deployer's judgement about the risks it is assuming by operating this system in this context. The risk record should be no longer than it needs to be. Supervisors reviewing it are looking for evidence that the deployer has understood the system and thought about the risks, not for an exhaustive technical treatise. The record must be updated whenever the system's operational parameters change materially.
- The oversight register. A structured record of the natural persons responsible for human oversight of the system under Articles 26(2) and 14. For each oversight person, the register records: their name and role, the specific system they oversee, the training they have completed (dates, content, and provider), the authority they hold to intervene, correct, or suspend the system, and the escalation path that connects them to a named senior decision maker. The register is a living document. If an oversight person leaves the organisation, the register must be updated before the system continues in operation. An oversight register that lists a department or a committee without naming individuals does not satisfy Article 26(2).
- The instructions-for-use map. A document that sets out, for each in-scope system, the provider's stated operational parameters alongside the deployer's actual usage. Where the two align, the map records the alignment. Where they diverge, the map records the divergence, the risk assessment supporting continued operation, and the decision-maker who authorised it. The instructions-for-use map is one of the most diagnostically useful documents in the operator file because it shows supervisors immediately whether the deployer is operating the system within its designed parameters. It is also the document most likely to reveal compliance problems that a deployer did not know it had, specifically the use of a system for purposes beyond its documented intended use.
- The logging schedule. A document specifying which fields are automatically generated by the system and captured by the deployer, where the logs are stored, in what format, how long they are retained, and how they can be retrieved for supervisor review. The schedule must address: the six-month retention floor in Article 26(6); any longer retention period required by sectoral law; the procedure for log production when a supervisor or auditor requests it; and the process for preserving log integrity during system migrations. Deployers in financial services, healthcare, and other regulated sectors should obtain legal advice on the interaction between the AI Act retention floor and existing sectoral retention obligations, which will almost always be more demanding.
- The incident protocol. A written procedure for identifying, escalating, reporting, and responding to serious incidents under Article 26(5). The protocol must define: what constitutes a serious incident by reference to Article 3(49) of the Regulation, not by reference to customer complaints or internal service level thresholds; the internal escalation chain from the point of identification to the decision to report; the procedure for notifying the provider; the identity and contact details of the relevant national market surveillance authority in each jurisdiction where the system operates; the form of the external notification and the information it must contain; the threshold at which the deployer suspends use of the system pending investigation; and the procedure for resuming use after suspension. The protocol must be tested before 2 August 2026. An untested incident protocol is a document. A tested one is a procedure.
These five documents are the minimum. Deployers in scope of Article 27 must also hold a completed and current Fundamental Rights Impact Assessment. Deployers in employment contexts must hold records of the worker information notices issued under Article 26(7). Public authorities must hold registration confirmations under Article 71. None of these additional documents replaces the five core documents. They supplement them.
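Because the supplements vary by deployer, a simple completeness check across the full set helps keep the file honest. The sketch below illustrates the logic of the paragraph above: the five core documents always apply, and the FRIA, worker information notice, and Article 71 registration record are added according to the deployer's circumstances. The document keys and flags are this article's shorthand, not defined terms.

```python
# A minimal sketch of a completeness check over the operator file, including
# the conditional supplements. Keys are illustrative shorthand.
CORE_DOCUMENTS = (
    "risk_record",
    "oversight_register",
    "instructions_for_use_map",
    "logging_schedule",
    "incident_protocol",
)

def required_documents(in_scope_art_27: bool,
                       employment_context: bool,
                       public_authority: bool) -> tuple:
    """The five core documents always apply; supplements depend on the deployer."""
    docs = list(CORE_DOCUMENTS)
    if in_scope_art_27:
        docs.append("fria")                       # Article 27 impact assessment
    if employment_context:
        docs.append("worker_information_notice")  # Article 26(7)
    if public_authority:
        docs.append("art_71_registration")        # EU database registration record
    return tuple(docs)

def missing(held: set, **scope_flags) -> list:
    return [d for d in required_documents(**scope_flags) if d not in held]

print(missing({"risk_record", "oversight_register", "fria"},
              in_scope_art_27=True, employment_context=False, public_authority=True))
# ['instructions_for_use_map', 'logging_schedule', 'incident_protocol', 'art_71_registration']
```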
Section 5. Penalties and enforcement architecture
Article 99 of Regulation (EU) 2024/1689 establishes three tiers of financial penalty. The first tier, covering breaches of the Article 5 prohibitions on unacceptable risk practices, reaches EUR 35 million or 7 per cent of worldwide annual turnover. The second tier, which applies to the Article 26 deployer obligations and other provisions applicable to operators, reaches EUR 15 million or 3 per cent. The third tier, applying to the provision of incorrect or incomplete information to supervisors, reaches EUR 7.5 million or 1 per cent.
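The "whichever is higher" mechanics are worth making explicit, because the percentage figure overtakes the fixed amount quickly for larger deployers. The sketch below shows the ceiling calculation only: the fine actually imposed is set by the supervisor using the factors discussed next, and the SME calibration in Article 99(6) applies separately.

```python
# A minimal sketch of the Article 99 ceiling calculation as described in this
# article: the fixed amount or the turnover percentage, whichever is higher.
# It models the ceiling only, not the fine a supervisor would actually set.
PENALTY_TIERS = {
    "article_5_prohibitions": (35_000_000, 0.07),
    "operator_obligations":   (15_000_000, 0.03),
    "incorrect_information":  (7_500_000, 0.01),
}

def penalty_ceiling(tier: str, worldwide_annual_turnover_eur: float) -> float:
    fixed, pct = PENALTY_TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover_eur)

# A deployer with EUR 2 billion turnover: 3 per cent (EUR 60 million) exceeds
# the EUR 15 million fixed figure, so the ceiling is EUR 60 million.
print(penalty_ceiling("operator_obligations", 2_000_000_000))  # 60000000.0
```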
The second tier is the relevant one for deployers who are not ready on 2 August. A deployer that cannot produce an operator file on request is not facing a warning. It is facing a formal inquiry that may result in a fine calibrated to those ceilings.
Article 99(7) instructs supervisors to consider the following factors when setting penalty levels: the nature, gravity, duration, and intentionality of the infringement; the degree of responsibility of the deployer and its technical and financial capacity; any benefits it derived from the infringement; whether other penalties have already been imposed for the same facts; and whether the deployer cooperated with the investigation. Article 99(6) adds the instruction that for SMEs and startups, penalties must be set with regard to their economic viability. This is not an exemption but a calibration factor.
The first enforcement actions by national market surveillance authorities are expected to follow a pattern familiar from early GDPR enforcement. Supervisors in larger Member States with dedicated AI supervisory capacity are likely to move first. The triggers for an inquiry are also familiar: a complaint from an affected person or a worker representative, a referral from a sectoral regulator or a data protection authority following a separate investigation, a press report or public disclosure of an AI-related incident, and proactive monitoring activities by the supervisor in high-risk sectors such as recruitment, credit, and public benefit administration.
Cooperation in an inquiry is not just a courtesy. It is a statutory factor that supervisors must consider when setting penalty levels. A deployer that responds promptly, produces its operator file without delay, and explains the steps it is taking to address any identified deficiency will face a materially different outcome than one that does not.
Section 6. The insurance dimension
The compliance readiness deficit described in Section 1 is not invisible to the insurance market. Underwriters writing technology, professional liability, and product liability policies are aware that 2 August 2026 will create a class of deployers operating AI systems without the documentation, oversight, or incident procedures their own policies may require.
The specialist AI liability insurance market is bifurcating. On one side, carriers with dedicated AI underwriting teams, including HSB (a Munich Re subsidiary writing AI operational risk), Armilla (writing third-party AI reliability insurance with model performance data requirements), AIUC (a specialist AI underwriting company with USD 15 million in seed funding), and Testudo (writing AI incident response coverage) are building products designed for the exposures that AI operators actually carry. On the other side, broad general liability and professional indemnity underwriters are restricting their exposure to AI-originating claims through endorsements, most notably ISO form CG 40 47 (excluding bodily injury and property damage arising from AI) and ISO form CG 40 48 (excluding professional liability arising from AI).
The consequence for deployers is that a policy purchased before the specialist market matured may not respond to a claim arising from an AI-related incident, a regulatory fine, or a third-party action following an autonomous agent error. The CG 40 47 and CG 40 48 endorsements were introduced into the US market in 2024 and are spreading into European policy wordings in 2025 and 2026. A deployer that has not reviewed its policy schedule for these endorsements since acquiring its current coverage should do so before August.
Munich Re's AI Act coverage product, developed through its aiSure programme, is among the more mature specialist products available to European deployers. Armilla's approach of requiring model evaluation data as a condition of coverage is instructive: it treats the operator file as an underwriting input, not an optional document. Deployers who can produce a complete operator file are more insurable, on better terms, than those who cannot.
The interaction between the operator file and insurability is not coincidental. Insurers writing AI liability coverage are building their underwriting criteria around the same documentation structure that the AI Act requires. A deployer that builds a compliant operator file is simultaneously building the documentation set that insurers will use to price and scope coverage.
Section 7. Cross-Member-State variation in supervisory authority
The designation of national market surveillance authorities for the AI Act provisions is underway across Member States, but the process is not complete and published positions vary significantly. The table below summarises the designations and published positions for the Member States with the highest concentration of high-risk AI deployers as of April 2026.
| Member State | Designated Authority (primary) | Published Position | Notable Guidance |
|---|---|---|---|
| France | CNIL (Commission Nationale de l'Informatique et des Libertés) | Designated for systems involving personal data in several Annex III categories. CNIL has published preliminary guidance on the interaction between AI Act and GDPR deployer obligations. | CNIL has indicated that it will treat the FRIA as the first document requested in an inquiry where Article 27 applies. Published guidance available on cnil.fr. |
| Germany | BfDI (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit) with sectoral co-designation | Germany has designated the BfDI for AI systems involving personal data while retaining sectoral supervisors (BaFin for financial applications, BSI for cybersecurity-adjacent systems) for their respective domains. | No consolidated deployer guidance published as of April 2026. BaFin has published preliminary remarks on AI governance in the financial sector aligned with the EIOPA AI Governance Opinion. |
| Netherlands | Autoriteit Persoonsgegevens (AP) | The AP has been designated as the primary supervisory authority for AI Act provisions intersecting with personal data processing. The AP has been active in AI governance since 2022 and has published statements on its enforcement priorities. | AP has signalled that recruitment, social benefit, and creditworthiness applications will be among its first enforcement focuses. Deployers in those sectors in the Netherlands should treat the AP as their primary contact for FRIA notifications. |
| Ireland | Data Protection Commission (DPC) | The DPC has been designated for AI Act provisions involving personal data, a significant designation given the volume of EU operations of major technology companies registered in Ireland. | No dedicated AI Act deployer guidance published as of April 2026. DPC guidance on GDPR AI intersections provides some relevant context. Deployers registered in Ireland with EU-wide operations should confirm the DPC designation and the cross-border interaction with other national supervisors. |
| Italy | AgID (Agenzia per l'Italia Digitale) as coordinating body, with Garante (GPDP) for personal data intersections | Italy has taken a coordinating approach with multiple authorities retaining jurisdiction by sector. AgID is the primary contact for public sector AI applications. | No consolidated guidance as of April 2026. Italy's public sector is a significant deployer of AI systems in administrative and benefit contexts and faces the Article 26(8) registration obligation for public authority deployments. |
| Spain | AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) | Spain has established AESIA as a dedicated AI supervisory authority, one of the first dedicated AI supervisors in the EU. AESIA has been preparing for the August 2026 provisions and has published preliminary sector-specific guidance. | AESIA has indicated that it will prioritise high-volume deployer sectors including recruitment, creditworthiness, and public benefit administration in its first enforcement cycle. Deployers in Spain should engage with AESIA's published guidance directly. |
Deployers operating across multiple Member States must identify the relevant supervisor in each jurisdiction. The obligation applies in each Member State where the system is used, and the supervisory designation may differ by Member State and by the category of the system. Where uncertainty exists about the relevant supervisor, legal advice from counsel in the operating jurisdiction is the appropriate course.
Section 8. What we expect to see between day 100 and day 0
These are scenarios built from observed market patterns, regulatory timelines, and the enforcement trajectories of comparable regimes. They illustrate how the period could unfold; they are not predictions.
Best case: orderly compliance
In the best case scenario, the availability of the week-by-week checklist structure, the AI Office's eventual publication of the FRIA questionnaire template, and the legal and advisory industry's increasing volume of deployer-facing guidance combine to produce an orderly compliance wave. A meaningful proportion of high-risk deployers complete their operator files by late July 2026. Supervisors open a small number of inquiry procedures in Q4 2026 based on complaint referrals, producing measured enforcement decisions that clarify the standard without triggering disproportionate market disruption. The insurance market completes its bifurcation and specialist coverage becomes broadly accessible by Q1 2027.
Median case: most deployers behind
The more likely trajectory, based on current readiness surveys, is that 2 August 2026 arrives with the majority of smaller high-risk deployers still without a complete operator file. Supervisors receive a volume of complaint referrals from workers and affected persons in the immediate period after activation, generating a caseload they may not have the capacity to address simultaneously. Enforcement is selective and slower than the Regulation's framework implies. A first wave of enforcement decisions emerges in Q1 2027, focused on deployers in politically visible sectors where individual harm is most legible. The uncertainty of the median period creates adverse selection in the insurance market: deployers with files get better coverage terms; those without them find specialist coverage expensive or unavailable.
Worst case: high-profile early enforcement actions
In the worst case, a significant AI-related incident in a high-risk sector in Q3 or Q4 2026 triggers an enforcement inquiry that results in a large and publicly disclosed fine before the end of 2026. The incident and the fine generate political pressure on supervisors in other Member States to accelerate their own enforcement programmes. Deployers that were quietly waiting for clarity find themselves named in inquiries before they had anticipated action would arrive. The insurance market tightens further in response, with specialist carriers raising premiums and tightening conditions for new business in the affected sectors.
The best mitigation against the worst case is the same as the best mitigation against the median case: a complete operator file, a tested incident protocol, and a named set of oversight persons who know what they are supposed to do. The deployers that reach 2 August with those three things in place are, in each scenario, materially better positioned than those that do not.
Section 9. Frequently asked questions
When is the EU AI Act deadline?
The operator provisions of the EU AI Act apply from 2 August 2026. This is the date on which the Chapter III deployer obligations under Regulation (EU) 2024/1689 enter into application for high-risk AI systems classified under Article 6 and Annex III. A second tier covering systems embedded in regulated products applies from 2 August 2027. There is no grandfathering period for systems already in deployment.
What happens on 2 August 2026?
On 2 August 2026, the deployer obligations set out in Articles 26, 27, and 50 of Regulation (EU) 2024/1689 become enforceable. Any deployer operating a high-risk AI system classified under Annex III without the required operator file, oversight structure, logging schedule, and incident protocol will be in breach from that date. National market surveillance authorities may open inquiries, request documentation, or impose fines under Article 99.
Who must comply with EU AI Act Article 26?
Article 26 applies to every deployer of a high-risk AI system used in a professional context. A deployer is defined in Article 3(4) as any natural or legal person, public authority, agency, or other body using an AI system under its authority. This covers private businesses, public sector bodies, and third-sector organisations. Purely personal, non-professional use is excluded. The obligation attaches to the deployer regardless of where the provider of the system is located.
What are the EU AI Act operator obligations?
Article 26 of Regulation (EU) 2024/1689 sets seven deployer obligations: use the system within the provider's instructions for use; assign named, trained, authorised oversight persons; verify input data relevance; monitor operation and report serious incidents; retain logs for at least six months; inform workers before employment-context deployments; and register use in the EU database if the deployer is a public authority. None of these obligations can be transferred to the provider by contract.
What documents must a deployer have on file by 2 August 2026?
The minimum operator file comprises five documents: a risk record describing the system and its classification; an oversight register naming persons responsible with their training and authority; an instructions-for-use map aligning provider operational limits with actual usage; a logging schedule specifying what is captured, stored, and for how long; and an incident protocol describing how serious incidents under Article 26(5) are identified, escalated, and reported. Deployers in scope of Article 27 must also hold a completed Fundamental Rights Impact Assessment.
Can a contract transfer EU AI Act operator duties to the provider?
No. The obligations in Article 26 are owed directly to the national market surveillance authority and cannot be transferred or waived by contract. A provider's terms of service, an indemnity clause, or a data processing agreement may allocate commercial risk between the parties, but it has no effect on the regulatory duty. A deployer that relies on contractual language to avoid preparing the operator file will be in breach from 2 August 2026.
What are the EU AI Act penalties?
Breaches of the Article 26 deployer obligations fall within the second tier of the penalty structure in Article 99, with fines up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. Supervisors must weigh the nature, gravity, and duration of the breach, the size and market position of the deployer, and whether the deployer cooperated with the inquiry. Article 99(6) provides that penalties for SMEs must take their economic viability into account, but this is a calibration factor, not an exemption.
Does the EU AI Act apply to small businesses?
Yes. The Article 26 obligations apply to every deployer operating a high-risk AI system in a professional context, including SMEs and startups. There is no size-based exemption from the compliance obligations. Article 99(6) instructs national supervisors to consider economic viability when setting the level of a fine, but the obligation to hold an operator file, maintain oversight, and retain logs applies to a ten-person company on the same legal basis as it applies to a listed corporation.
Is the FRIA required by 2 August 2026?
Yes, for deployers within scope of Article 27. The obligation applies to three categories: bodies governed by public law; private entities providing public services such as education, healthcare, housing benefit administration, and social assistance; and deployers of systems used for creditworthiness assessment (Annex III point 5(b)) or life and health insurance risk pricing (Annex III point 5(c)). The assessment must be completed before first deployment. For systems already running on 2 August 2026, it must be completed before that date.
What insurance covers EU AI Act compliance failures?
The specialist AI liability insurance market is bifurcating. Specialist carriers, including HSB (a Munich Re subsidiary), Armilla, and AIUC, are building products designed for AI operational risk. Broad general liability and professional indemnity carriers are tightening exclusions, including ISO form CG 40 47 and CG 40 48 endorsements, to remove AI-originated claims from standard coverage. Deployers seeking coverage for regulatory fines, third-party claims arising from AI errors, and incident response costs should engage brokers with AI-specific technical expertise, as standard policy wordings do not reliably respond to these exposures.
Section 10. Use the tools
FRIA Generator
The FRIA Generator at agentliability.eu walks deployers through the seven elements of the Fundamental Rights Impact Assessment required by Article 27(1)(a)-(g). It produces a structured output document that maps to the supervisor notification requirement under Article 27(3) and is designed to complement rather than duplicate an existing DPIA under GDPR Article 35. If you are in scope of Article 27 and have not started your FRIA, the generator is the fastest way to produce a complete first draft.
Use the FRIA Generator
Readiness Scorecard
The Readiness Scorecard at agentliability.eu runs a deployer through a structured assessment of its current operator file status across all five documents, the oversight register, the logging schedule, the incident protocol, and, where applicable, the FRIA. The output is a gap register with a readiness score and a prioritised action sequence. If you are unsure where your organisation stands at day 100, the scorecard produces a baseline in under 30 minutes that you can use to structure the remaining weeks.
Take the Readiness Scorecard
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 2, Regulation (EU) 2024/1689. Scope of application, including the exclusion of purely personal, non-professional use.
- Article 3(4), Regulation (EU) 2024/1689. Definition of deployer.
- Article 3(49), Regulation (EU) 2024/1689. Definition of serious incident, covering death, serious harm to health, significant disruption of critical infrastructure, and serious damage to property or the environment.
- Article 6 and Annex III, Regulation (EU) 2024/1689. Classification criteria for high-risk AI systems, including use in recruitment, creditworthiness assessment, access to essential services, law enforcement, and administration of justice.
- Article 14, Regulation (EU) 2024/1689. Human oversight requirements: design of oversight capability and staffing by the deployer of competent, authorised persons.
- Article 25, Regulation (EU) 2024/1689. Responsibilities of the deployer when using a general-purpose AI model or system in a high-risk context.
- Article 26, Regulation (EU) 2024/1689. Obligations of deployers of high-risk AI systems, sub-paragraphs (1) through (8).
- Article 27, Regulation (EU) 2024/1689. Fundamental rights impact assessment for deployers of certain high-risk AI systems, including the seven-element structure and supervisor notification requirement.
- Article 50, Regulation (EU) 2024/1689. Transparency obligations toward natural persons interacting with AI systems.
- Article 71, Regulation (EU) 2024/1689. EU database for high-risk AI systems: registration obligation for public authorities and bodies acting on their behalf.
- Article 79, Regulation (EU) 2024/1689. Conditions under which a deployer must suspend use of a high-risk AI system following identification of risk.
- Article 99, Regulation (EU) 2024/1689. Penalties: three-tier structure, EUR 35 million or 7 per cent (prohibitions), EUR 15 million or 3 per cent (deployer obligations), EUR 7.5 million or 1 per cent (incorrect information). SME economic viability instruction at paragraph (6).
- Directive (EU) 2024/2853 of the European Parliament and of the Council on liability for defective products, OJ L, 18.11.2024. Classification of AI software as a product subject to strict liability.
- EIOPA Opinion on Artificial Intelligence Governance and Risk Management, EIOPA-BoS-21/001, 17 June 2021. Supervisory expectations for AI governance in the insurance sector, aligned with the AI Act framework and relevant to deployers in insurance risk pricing contexts.
- ISO form CG 40 47. Insurance Services Office exclusion endorsement narrowing general liability coverage for bodily injury and property damage arising from AI systems.
- ISO form CG 40 48. Insurance Services Office exclusion endorsement narrowing professional liability coverage for claims arising from AI systems.
- CNIL preliminary guidance on AI Act and GDPR interaction for deployers. Commission Nationale de l'Informatique et des Libertés, 2025 series. Available at cnil.fr.
- AESIA sector guidance for high-risk AI deployers. Agencia Española de Supervisión de la Inteligencia Artificial, 2026. Available at aesia.gob.es.