Four months before the main deployer obligations of Regulation (EU) 2024/1689 enter into application, the enforcement architecture is largely in place. This analysis maps the AI Office, the European Artificial Intelligence Board, the national market surveillance authorities, and the data protection authorities as enforcement actors, and explains how the penalty tiers function in practice.

Key takeaways

  • The AI Office sits within the European Commission and is the competent authority for general-purpose AI models. It does not handle routine deployer enforcement.
  • National market surveillance authorities, designated by each member state under Article 70, are the bodies that will conduct deployer enforcement inquiries in practice.
  • Data protection authorities are designated as market surveillance authorities for specific high-risk use cases involving personal data, including biometrics and law enforcement AI.
  • Article 99 creates three penalty tiers. Deployer violations under Article 26 fall in the second tier: up to EUR 15 million or 3 per cent of worldwide annual turnover.
  • The enforcement timeline is staggered. Article 5 prohibitions have applied since 2 February 2025. Main deployer obligations apply from 2 August 2026. Systems in regulated products face a further deferral to 2 August 2027.

The two-level structure

The enforcement architecture of the EU AI Act divides responsibility between the European and national levels, though not as a clean hierarchy. The AI Office has powers over general-purpose AI model providers anywhere in the world whose models are placed on the European market. National market surveillance authorities have powers over high-risk AI providers and deployers within their territory. The European Artificial Intelligence Board coordinates between the two levels and provides opinions and recommendations, but it does not itself take enforcement action.

For an operator deploying a high-risk AI system in one or more EU member states, the most immediately relevant authority is national. The operator should identify the designated market surveillance authority in each member state where it deploys, understand what that authority has said publicly about its approach to AI Act enforcement, and assess whether the authority has additional sectoral powers in the operator's industry.

The AI Office

The AI Office was established within the European Commission by Commission Decision of 24 January 2024. Articles 88 through 94 of Regulation (EU) 2024/1689 define its powers and responsibilities.

The AI Office's primary mandate is oversight of general-purpose AI models, including those classified as posing systemic risk. Articles 51 through 56 set out the obligations for providers of such models: transparency documentation, a copyright policy, evaluation of systemic risk, and, for the highest-capability models, adversarial testing and incident reporting. The AI Office can investigate providers of general-purpose AI models, issue decisions requiring remediation, and impose financial penalties of up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher.

The AI Office is also the coordinating body for the AI Act's enforcement across member states, maintains the EU database of high-risk AI systems under Article 71, and provides the secretariat for the European Artificial Intelligence Board. In this sense it sits above the national level, but its direct enforcement powers are limited to providers of general-purpose AI models.

For most deployers of existing commercial AI products, the AI Office is not the enforcement risk. The entities in the AI Office's direct enforcement scope are the large model providers, not the businesses deploying those models downstream. The downstream deployers are supervised by national authorities.

National market surveillance authorities

Article 70(1) requires each member state to designate one or more national competent authorities responsible for market surveillance and enforcement of the AI Act within its territory. Each designation must be notified to the Commission and made publicly available. As of April 2026, the majority of member states have designated their primary authority, with several designating joint authorities where the use case spans multiple regulatory domains.

The designated authorities vary significantly by member state. In Germany, the Bundesnetzagentur has been designated for general enforcement alongside sector-specific bodies. In France, the Autorité de régulation des communications électroniques, des postes et de la distribution de la presse and the Commission nationale de l'informatique et des libertés share responsibilities. In the Netherlands, the Autoriteit Persoonsgegevens has been designated for AI systems involving personal data processing. Each member state's designation reflects the existing regulatory architecture, which means the AI Act is being layered onto a fragmented national supervisory landscape.

The practical consequence for deployers operating across multiple member states is that compliance needs to be defensible to multiple authorities simultaneously. The substantive requirements are uniform under the Regulation, but the enforcement style, documentation preferences, and initial inquiry triggers vary by authority. Early engagement with the national authority in each operating jurisdiction, through public consultations, guidance requests, or informal dialogue, is the most effective way to understand enforcement priorities before they crystallise into formal proceedings.

Data protection authorities as market surveillance bodies

Article 74(8) requires member states to designate, for AI systems used for biometric identification, law enforcement, and certain other high-risk categories involving personal data, either their data protection supervisory authorities or other authorities subject to the same independence safeguards as market surveillance authorities. This is a deliberate architectural choice: the regulation's authors recognised that enforcement of AI obligations and data protection obligations would overlap significantly in the most sensitive use cases.

The European Data Protection Supervisor and national data protection authorities have existing investigatory and enforcement powers under the GDPR, and these powers are now extended to cover the AI Act obligations in their designated categories. A deployer that receives a data subject complaint involving an AI system used in employment screening or credit scoring should expect the data protection authority to arrive with both sets of powers.

The Regulation also provides for coordination between data protection authorities and other designated market surveillance authorities where the same AI system falls under both regimes. The coordination mechanism is not yet fully developed in practice, and early AI Act enforcement cases will likely reveal gaps in it. Deployers should document their compliance position under both regimes and maintain files that can be produced to either authority.

The European Artificial Intelligence Board

Article 65 establishes the European Artificial Intelligence Board, composed of one representative per member state, with the European Data Protection Supervisor participating as an observer. The AI Office provides the Board's secretariat and attends its meetings without voting. The Board issues opinions, guidelines, and recommendations; it does not itself take enforcement action.

The Board's most significant function for deployers is its role in developing guidance on the application of the AI Act's provisions. Article 96 tasks the Commission with issuing guidelines on the practical implementation of the classification rules, the risk management requirements, and the transparency obligations. The Board contributes to those guidelines and provides opinions on the Commission's drafts. Deployers should monitor Board opinions and Commission guidelines as they are published, because they are the documents that national authorities will cite when explaining their enforcement decisions.

The enforcement timeline

The AI Act entered into force on 1 August 2024. Its provisions apply in stages. The prohibition of the AI practices listed in Article 5, including social scoring systems, real-time remote biometric identification in publicly accessible spaces by law enforcement, and subliminal manipulation techniques, has applied since 2 February 2025.

The main body of obligations, covering providers and deployers of high-risk AI systems under Chapters III and IV, applies from 2 August 2026. This is the date on which Articles 9 through 17, Article 26, and the associated penalty provisions become operative. High-risk AI systems embedded in the products listed in Annex I (machinery, medical devices, toys, and related categories) have a further deferral to 2 August 2027.

The general-purpose AI model obligations in Chapter V applied from 2 August 2025, with the AI Office's codes of practice for high-capability models developed during the transition period. Those codes are the primary compliance framework for the large model providers.

How the penalty calculation works

Article 99 sets three tiers of maximum penalty. The calculation uses the higher of a fixed amount or a percentage of worldwide annual turnover.

The first tier applies to violations of the Article 5 prohibitions: up to EUR 35 million or 7 per cent of worldwide annual turnover. A business with EUR 100 million in global revenue faces a maximum penalty of EUR 35 million, because the fixed amount exceeds 7 per cent of its turnover (EUR 7 million). A business with EUR 1 billion in global revenue faces a maximum of EUR 70 million, because 7 per cent of turnover exceeds the fixed amount.

The second tier applies to violations of obligations applicable to providers and deployers of high-risk AI systems, including the Article 26 deployer obligations: up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. For a large enterprise with EUR 1 billion in global revenue, the ceiling is EUR 30 million. For a business that qualifies as an SME, Article 99(6) inverts the rule and caps the fine at whichever of the two amounts is lower: an SME with EUR 5 million in global revenue faces a ceiling of EUR 150,000 rather than EUR 15 million.

The third tier, at up to EUR 7.5 million or 1 per cent, covers the provision of incorrect, incomplete, or misleading information to notified bodies or authorities. This tier is significant for deployers who make representations about their AI compliance position during a conformity assessment or supervisory inquiry. A document submitted to a market surveillance authority that misstates the deployer's risk management system triggers this tier at minimum, and potentially the second tier if the misstatement concealed an underlying obligation breach.
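The tier arithmetic can be sketched as a short function. The fixed amounts and percentages are from Article 99 as described above; the function name, the basis-point encoding, and the boolean SME flag are illustrative conveniences, not anything the Regulation prescribes.

```python
def penalty_ceiling(tier: int, turnover_eur: int, sme: bool = False) -> int:
    """Maximum administrative fine under Article 99 of Regulation (EU) 2024/1689.

    Tier 1: Article 5 prohibited practices        (EUR 35m  or 7 per cent)
    Tier 2: high-risk provider/deployer breaches  (EUR 15m  or 3 per cent)
    Tier 3: incorrect/misleading information      (EUR 7.5m or 1 per cent)

    For most operators the ceiling is the HIGHER of the fixed amount and the
    percentage of worldwide annual turnover. Article 99(6) caps fines on SMEs
    and startups at whichever of the two amounts is LOWER.
    """
    fixed, pct_basis_points = {
        1: (35_000_000, 700),  # 700 bp = 7 per cent
        2: (15_000_000, 300),  # 300 bp = 3 per cent
        3: (7_500_000, 100),   # 100 bp = 1 per cent
    }[tier]
    turnover_share = turnover_eur * pct_basis_points // 10_000
    return min(fixed, turnover_share) if sme else max(fixed, turnover_share)
```

On this sketch, a first-tier violation by a EUR 1 billion enterprise is capped at EUR 70 million, and a second-tier violation by the same enterprise at EUR 30 million; integer basis-point arithmetic avoids floating-point rounding in the percentage leg.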

A separate provision, Article 100, governs administrative fines on EU institutions, bodies, and agencies, imposed by the European Data Protection Supervisor and capped at EUR 1.5 million for prohibited-practice violations and EUR 750,000 for other obligations. This acknowledges the different accountability context for public bodies, but it does not reduce the pressure on private sector deployers.

Article 99(6) caps fines on SMEs and startups at whichever of the fixed amount and the turnover percentage is lower, and Article 99(7) requires supervisors to take account of factors such as the size and market position of the operator when setting penalty levels. Neither is an exemption, but both are material factors a competent authority must address in its penalty decision. A deployer that can demonstrate genuine compliance effort, a credible remediation plan, and cooperation with the supervisory inquiry is substantially less exposed than one that has ignored the regime entirely.

What triggers an enforcement inquiry

The AI Act does not create a dedicated deployer-level complaints regime, but Article 85 gives any natural or legal person with grounds to consider that the Regulation has been infringed the right to lodge a complaint with the relevant market surveillance authority. In practice that includes employees, contractors, and individuals affected by an AI system's outputs. Data protection authorities also have existing channels for complaints about AI-related data processing.

An enforcement inquiry can be triggered by a complaint, by a serious incident notification under Article 26(5), by a market surveillance authority's own monitoring activities, or by a cross-border coordination request from another member state's authority. The Article 26(5) serious incident reporting obligation is the mechanism most likely to trigger an inquiry at the deployer level: a deployer who correctly reports a serious incident is also inviting the supervisor to examine the governance around the system that caused it.

The interaction between the serious incident obligation and enforcement is not contradictory. A deployer who reports correctly and demonstrates a credible response is in a materially better position than one who fails to report and is discovered later through other means. Article 99(7) requires supervisors to take account of the degree of cooperation with the investigation and of corrective action taken when setting penalties. The deployer who reports, remediates, and cooperates is building the mitigating record that reduces the penalty calculation. The deployer who conceals is building the aggravating record that increases it.

Implications for compliance and insurance

The enforcement architecture has direct implications for how deployers should think about their compliance file and their insurance coverage. The compliance file described in the Article 26 operator obligations guide is also the first document an authority will ask for in an enforcement inquiry. A deployer who holds the five minimum documents (a risk record, an oversight register, an instructions-for-use map, a logging schedule, and an incident protocol) is demonstrating the cooperative and diligent posture that Article 99 instructs supervisors to reward.
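As an illustration only, the five-document file lends itself to a simple completeness check. The document names are taken from the list above; the data structure and helper function are hypothetical conveniences, not a format the Regulation prescribes.

```python
# The five minimum compliance-file documents named above. The tuple and the
# helper are illustrative only; the Regulation prescribes no particular format.
REQUIRED_DOCUMENTS = (
    "risk record",
    "oversight register",
    "instructions-for-use map",
    "logging schedule",
    "incident protocol",
)

def missing_documents(compliance_file: dict[str, bool]) -> list[str]:
    """Return the required documents not yet present and current in the file."""
    return [doc for doc in REQUIRED_DOCUMENTS if not compliance_file.get(doc, False)]
```

A file tracked this way can be audited on demand: any non-empty result flags the documents a supervisory inquiry would find missing.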

For the insurance implication, the enforcement architecture creates a regulatory indemnity exposure that standard policies are not written to cover. An authority conducting an inquiry may issue remediation orders, impose fines, or require a fundamental rights impact assessment to be produced. The legal costs of defending the inquiry, the cost of producing the required documentation, and the fines themselves are all distinct heads of loss. The coverage question is which of them the AI policy covers, and whether the policy's regulatory penalty indemnity extends to EU AI Act proceedings. For context on how coverage frameworks address this, see what AI agent insurance will cover on the sister site.

Frequently asked questions

Which authority enforces the EU AI Act against deployers?

For most high-risk AI deployers, enforcement sits with the national market surveillance authority designated by each member state under Article 70 of Regulation (EU) 2024/1689. The AI Office is the competent authority for general-purpose AI models and systemic risk oversight, but it does not typically handle deployer-level enforcement of the Article 26 obligations. Data protection authorities also serve as market surveillance authorities for certain high-risk use cases involving personal data.

When does EU AI Act enforcement begin?

The prohibition provisions in Article 5 have applied since 2 February 2025. The main obligations for providers and deployers of high-risk AI systems, including Articles 9 through 17 and Article 26, apply from 2 August 2026. Penalties under Article 99 become operative against those obligations on the same date. A second phase covering AI systems embedded in regulated products applies from 2 August 2027.

What is the difference between the AI Office and national market surveillance authorities?

The AI Office is a European Commission body responsible for oversight of general-purpose AI model providers, systemic risk assessment, and coordination across member states. National market surveillance authorities are responsible for enforcement against providers and deployers of high-risk AI systems within their territory. Both levels coordinate through the European Artificial Intelligence Board established under Article 65.

Can a deployer in one member state be pursued by another member state's authority?

The primary enforcer is the authority in the member state where the deployer is established or, if not established in the EU, where the affected persons are located. Articles 74 and 75 set out the market surveillance coordination mechanisms. Where a deployer's AI system affects persons in multiple member states, the authorities of those states can coordinate action and request information from the authority with primary jurisdiction.

What are the penalty tiers under Article 99 and who can they apply to?

Article 99 sets three tiers. The first tier, up to EUR 35 million or 7 per cent of global turnover, applies to violations of the Article 5 prohibited practices. The second tier, up to EUR 15 million or 3 per cent, applies to provider and deployer obligation violations including Article 26. The third tier, up to EUR 7.5 million or 1 per cent, applies to providing incorrect or misleading information to notified bodies or authorities. All tiers can apply to deployers for violations within their scope.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
  2. Article 5, Regulation (EU) 2024/1689, prohibited AI practices. Applied from 2 February 2025.
  3. Article 65, Regulation (EU) 2024/1689, establishing the European Artificial Intelligence Board.
  4. Article 70, Regulation (EU) 2024/1689, designation of national competent authorities and market surveillance authorities.
  5. Articles 74 and 75, Regulation (EU) 2024/1689, cross-border cooperation and mutual assistance between market surveillance authorities.
  6. Articles 88 to 94, Regulation (EU) 2024/1689, AI Office powers and responsibilities.
  7. Article 99, Regulation (EU) 2024/1689, penalties including the three-tier fine structure.
  8. European Commission Decision of 24 January 2024 establishing the AI Office within the European Commission.
  9. Regulation (EU) 2016/679 (General Data Protection Regulation), coordination with AI Act enforcement where data protection authorities act as market surveillance authorities.