The Act does not use the word "operator" in its enacting terms. It uses "deployer," and it draws the line at whoever is using an AI system under their own authority in the course of a professional activity. For the purposes of this publication, operator and deployer are the same figure, and the obligations described below attach to that figure regardless of whether the underlying model was built in-house, licensed from a provider, or accessed through an API.

Who is an operator

Article 3(4) defines a deployer as any natural or legal person, public authority, agency, or other body using an AI system under its authority. The definition excludes purely personal, non-professional activity. Two consequences follow. First, a sole trader running an autonomous agent to qualify sales leads is an operator. Second, a department within a larger group that uses an agent procured centrally is an operator for its own use, even though the contract with the provider was signed elsewhere.

The Act draws a second line between providers and deployers in Article 25, on the reclassification of deployers as providers. Any substantial modification of a high-risk AI system, any rebranding under the deployer's own name, or any repurposing beyond the system's intended use can convert the deployer into a provider, with the full upstream obligations. The practical implication is that prompt-layer customisation, retrieval augmentation, or fine-tuning can quietly change who is on the hook.

Article 26 obligations

Article 26 is the operational core of the deployer regime for high-risk systems. The obligations run as follows.

  1. Use in accordance with instructions. Deployers must take appropriate technical and organisational measures to use the system within the parameters set by the provider's instructions for use (Art. 26(1)).
  2. Human oversight assignment. Deployers must assign human oversight to natural persons who have the necessary competence, training, authority, and support to exercise it (Art. 26(2)).
  3. Input data verification. To the extent the deployer exercises control over input data, they must ensure it is relevant and sufficiently representative in view of the system's intended purpose (Art. 26(4)).
  4. Monitoring and incident duty. Deployers must monitor operation, report serious incidents to the provider and, where applicable, to the market surveillance authority, and suspend use where monitoring reveals a risk within the meaning of Article 79 (Art. 26(5)).
  5. Log retention. Automatically generated logs must be kept for a period appropriate to the intended purpose, and at least six months unless Union or national law provides otherwise (Art. 26(6)).
  6. Worker information. Where the system is used in an employment context, deployers must inform worker representatives and affected workers before putting it into service (Art. 26(7)).
  7. Public sector registration. Deployers that are public authorities must register their use in the EU database of high-risk systems before or at the time of deployment (Art. 26(8)).
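The log retention duty in point 5 reduces to simple date arithmetic: the deployer must keep each log for the longer of the purpose-appropriate period and the six-month floor in Article 26(6). The sketch below illustrates that rule; the function and variable names are this publication's own shorthand, not terms from the Act, and stricter Union or national retention law would override the floor.

```python
from datetime import date

RETENTION_FLOOR_MONTHS = 6  # Art. 26(6) minimum, absent stricter Union or national law

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later, clamping to month end."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # Clamp the day for short months (e.g. 31 August + 6 months falls in February).
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def earliest_deletion_date(log_created: date, purpose_retention_months: int) -> date:
    """Apply the longer of the purpose-appropriate period and the six-month floor."""
    months = max(purpose_retention_months, RETENTION_FLOOR_MONTHS)
    return add_months(log_created, months)

print(earliest_deletion_date(date(2026, 8, 2), 3))   # floor applies: 2027-02-02
print(earliest_deletion_date(date(2026, 8, 2), 12))  # purpose period applies: 2027-08-02
```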

None of these duties are delegable through contract. A provider's terms of service cannot remove them, and an indemnity cannot convert a compliance failure into a recoverable commercial loss. They are owed to the supervisor and, indirectly, to the persons affected by the system's outputs.

Fundamental rights impact assessment

Article 27 adds a further obligation on a narrower class of deployers. Public bodies, private operators providing public services, and any deployer using a high-risk system listed in specific points of Annex III must complete a fundamental rights impact assessment before first deployment. The assessment covers the process, the categories of persons likely to be affected, the risks of harm, the oversight measures in place, and the mitigation plan if risks materialise.

Unlike the provider's conformity assessment under Article 43, the FRIA is a living document. It must be updated whenever any of its elements changes materially, and it must be made available to the supervisor on request. Several national data protection authorities have indicated that they will treat the FRIA as the first document they ask for in an enforcement inquiry.

Enforcement architecture

Enforcement sits with two sets of authorities. At the Union level, the AI Office, the European Artificial Intelligence Board, and the Commission coordinate on general-purpose AI and on cross-border matters. At the member state level, each country designates one or more competent authorities to carry out market surveillance under Chapter IX of the Act.

Operators should expect the first inquiries to arrive from existing supervisors extended to cover the Act: financial regulators for AI used in credit scoring, labour inspectorates for AI used in employment, consumer protection authorities for AI in education, and data protection authorities across all sectors where personal data is processed. The Act does not displace the GDPR, and coordination between the two regimes is explicitly foreseen in Article 70.

Penalties

Article 99 sets the fine ceilings. Non-compliance with the prohibitions in Article 5 carries the highest exposure, at up to EUR 35 million or 7 per cent of worldwide annual turnover, whichever is higher. Most operator failures fall under the second tier: up to EUR 15 million or 3 per cent of turnover for breaches of the obligations that apply to providers and deployers of high-risk AI, including the Article 26 duties described above. The provision of incorrect, incomplete, or misleading information to notified bodies and competent authorities sits at a third tier of up to EUR 7.5 million or 1 per cent of turnover.
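The "whichever is higher" rule means the effective ceiling scales with turnover once an operator is large enough. A short sketch of the three tiers described above; the tier labels and function name are illustrative only, and actual fines are set by national authorities within these ceilings, not computed mechanically.

```python
# (fixed amount in EUR, percentage of worldwide annual turnover) per Art. 99
TIERS = {
    "art5_prohibitions": (35_000_000, 0.07),      # EUR 35M or 7%
    "high_risk_obligations": (15_000_000, 0.03),  # EUR 15M or 3%
    "misleading_information": (7_500_000, 0.01),  # EUR 7.5M or 1%
}

def fine_ceiling(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the higher of the fixed amount
    and the turnover percentage (Art. 99)."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * worldwide_annual_turnover_eur)

# A deployer with EUR 2bn turnover breaching Article 26 duties:
print(fine_ceiling("high_risk_obligations", 2_000_000_000))  # 60000000.0
```

For that hypothetical deployer, 3 per cent of turnover (EUR 60 million) exceeds the fixed EUR 15 million, so the percentage sets the ceiling.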

Supervisors must take into account the nature, gravity, and duration of the infringement, the size and annual turnover of the operator, whether other authorities have already imposed penalties for the same facts, and whether the operator cooperated with the investigation. SMEs and startups benefit from a specific instruction in Article 99(6) that penalties be set with regard to their economic viability.

Timeline

The Act entered into force on 1 August 2024. The prohibitions in Article 5 have applied since 2 February 2025. The rules on general-purpose AI models and the first layer of governance have applied since 2 August 2025. The operator regime for high-risk systems listed in Annex III, together with most of the provisions relevant to deployers, begins to apply on 2 August 2026. A small number of obligations tied to systems embedded in regulated products under Union harmonisation legislation are deferred to 2 August 2027.
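The phased dates above can be kept as a simple lookup table when screening a deployment portfolio. The category labels below are this publication's own shorthand, not terms from the Act.

```python
from datetime import date

# Application dates of the phases described above.
APPLICATION_DATES = {
    "article_5_prohibitions": date(2025, 2, 2),
    "gpai_and_governance": date(2025, 8, 2),
    "annex_iii_high_risk": date(2026, 8, 2),
    "embedded_regulated_products": date(2027, 8, 2),
}

def applies_on(category: str, day: date) -> bool:
    """True once the regime for `category` has begun to apply."""
    return day >= APPLICATION_DATES[category]

print(applies_on("annex_iii_high_risk", date(2026, 1, 1)))      # False
print(applies_on("article_5_prohibitions", date(2026, 1, 1)))   # True
```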

This publication treats 2 August 2026 as the hard deadline for operator readiness. Everything described in this briefing must be in place on that date for any high risk system already in production and must be in place from day one for any system deployed afterwards.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, OJ L, 12.7.2024.
  2. Article 3, definitions. In particular Article 3(4), deployer.
  3. Article 14, human oversight. Read together with the delegated elements in Annex IV and Annex VIII.
  4. Article 25, on the reclassification of deployers as providers through substantial modification or rebranding.
  5. Article 26, operator obligations. The full text of the seven subparagraphs discussed above.
  6. Article 27, fundamental rights impact assessment for deployers of certain high-risk AI systems.
  7. Article 79, procedures at national level for dealing with AI systems presenting a risk.
  8. Article 99, penalties. Including the three-tier structure and the specific SME guidance.
  9. Recitals 53 to 65, the interpretive framework for high-risk classification and operator duties.