Article 5 is structured differently from every other provision in the EU AI Act. While the Act's high-risk obligations, GPAI requirements, and transparency duties each carry phased application dates, the prohibitions in Article 5 were the first provisions to bind all operators. There is no transition period, no grace period, and no staged rollout. Any AI system that falls within one of the eight prohibited categories is unlawful now.

Key takeaways

  • Article 5 prohibitions have been in force since 2 February 2025 and apply to both public and private operators across the EU.
  • The highest penalty tier applies: up to EUR 35 million or seven per cent of global turnover, whichever is higher.
  • The prohibitions are not affected by the Digital Omnibus proposal, which concerns only the application dates of the Annex III high-risk obligations.
  • The prohibitions apply to public and private operators alike; only the real-time remote biometric identification prohibition is framed around law enforcement use.

Why Article 5 sits outside the staged rollout

Regulation (EU) 2024/1689 entered into force on 1 August 2024. Article 113 sets the general application date of 2 August 2026 and carves out staged exceptions: the GPAI and governance provisions apply from 2 August 2025, the extended high-risk classification rules of Article 6(1) from 2 August 2027, and, under point (a), Chapters I and II, which contain the Article 5 prohibitions, from 2 February 2025. That is the earliest date in the schedule, six months after entry into force.

The rationale is set out in the legislative record. The practices listed in Article 5 were characterised during the legislative process as fundamental violations of human dignity, autonomy, and the right to non-discrimination. The co-legislators took the position that no operator required a transition period to stop conducting practices that the Union had already determined to be incompatible with its values. The six-month window was offered to allow providers and deployers to audit their portfolios and withdraw or modify any system in scope, not to continue operating prohibited systems while preparing governance frameworks.

The Digital Omnibus on AI, the Commission proposal currently in trilogue, proposes to defer the 2 August 2026 high-risk obligations by sixteen months to 2 December 2027. The proposal does not touch Article 5. No member state delegation, no Parliament rapporteur, and no Commission communication has suggested any modification to the prohibitions or their February 2025 application date. For operators reviewing compliance timelines, the Omnibus is not relevant to Article 5.

The practical consequence is that operators who have been deferring AI Act compliance work until the 2026 high-risk deadlines have been operating under binding legal obligations since February 2025 without necessarily realising it. The Article 5 audit is overdue for any enterprise that has not completed it.

The eight prohibited practices

Article 5(1) of Regulation (EU) 2024/1689 lists eight categories of prohibited AI system. Each is defined with reference to the specific mechanism of harm it targets.

Article 5(1)(a): subliminal or manipulative techniques causing harm. AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort a person's behaviour in a way that causes or is likely to cause that person or another person significant harm. The key elements are the covert or manipulative mechanism and the significant harm requirement. Systems that influence behaviour through transparent persuasion, even where the persuasion is effective, are not covered. Systems that use subliminal audio cues, visual priming below the threshold of conscious perception, or other covert mechanisms to alter decision-making are covered where the resulting distortion causes or is likely to cause significant harm.

Article 5(1)(b): exploitation of vulnerabilities of specific groups. AI systems that exploit specific vulnerabilities of persons or groups arising from age, disability, or specific social or economic situation, through techniques that materially distort behaviour in a way that causes or is likely to cause significant harm. This prohibition does not require subliminal techniques. It applies where the system is designed to exploit a known vulnerability, such as the reduced cognitive capacity of a person with dementia, the financial desperation of a person in economic crisis, or the developmental stage of a child, to achieve behavioural outcomes that harm that person.

Article 5(1)(c): social scoring. AI systems for the evaluation or classification of natural persons or groups of persons over a period of time, based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the social score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, or to treatment that is unjustified or disproportionate to the social behaviour or its gravity. Unlike the Commission's 2021 proposal, the final text is not limited to public authorities: private scoring systems are covered where they produce the prohibited detrimental treatment. Lawful, purpose-specific evaluation practices, such as creditworthiness scoring conducted in accordance with Union and national law, are not caught merely because they score individuals, though they remain subject to other provisions of EU law.

Article 5(1)(d): individual criminal risk prediction based on profiling. AI systems used for making risk assessments of natural persons in order to assess or predict the risk of their committing a criminal offence, based solely on the profiling of a natural person or on the assessment of their personality traits and characteristics. The qualifier "solely" is significant. The prohibition targets systems that predict criminal risk without reference to objective and verifiable facts directly linked to criminal activity. A system that weights personality traits or demographic patterns to generate a risk score without reference to specific, verifiable conduct is prohibited. Systems that support a human assessment already grounded in such objective facts are in a different position, though they remain subject to other applicable provisions of EU law.

Article 5(1)(e): facial recognition databases built through untargeted scraping. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit television footage. The prohibition applies regardless of the purpose for which the database is created and regardless of whether the resulting database is used for law enforcement or commercial purposes. Any system that systematically harvests facial images from online sources or from surveillance footage to build a biometric dataset falls within this prohibition.

Article 5(1)(f): emotion recognition in workplaces and educational institutions. AI systems that infer the emotions of natural persons in the workplace or educational institutions, with the exception of AI systems used for medical or safety reasons. The prohibition covers any system that reads facial expressions, vocal patterns, physiological signals, or other behavioural cues to infer an employee's emotional state or a student's emotional response. The safety and medical exceptions are narrow. A system that monitors operator alertness for safety reasons on a production line may qualify. A system that monitors employee satisfaction or engagement during meetings does not.

Article 5(1)(g): biometric categorisation systems inferring sensitive attributes. AI systems that categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The prohibition targets the inference of these sensitive characteristics from biometric inputs, not biometric identification as such; the labelling or filtering of lawfully acquired biometric datasets, and the categorisation of biometric data in the area of law enforcement, sit outside it. A system that uses facial geometry to match a face against a watch-list performs a different function from a system that uses facial geometry to infer political affiliation. The latter is prohibited under Article 5(1)(g).

Article 5(1)(h): real-time remote biometric identification in publicly accessible spaces for law enforcement. AI systems used for the real-time remote biometric identification of natural persons in publicly accessible spaces for the purposes of law enforcement, except in the three situations exhaustively listed in Article 5(1)(h)(i) to (iii): the targeted search for victims of abduction, trafficking, or sexual exploitation and for missing persons; the prevention of a specific, substantial, and imminent threat to life or physical safety, or of a genuine and present or foreseeable threat of a terrorist attack; and the localisation or identification of a suspect of a serious criminal offence listed in Annex II punishable by at least four years' imprisonment. Use under those exceptions is subject to the conditions in Article 5(2) to (6), including prior judicial or independent administrative authorisation in all but duly justified urgent cases. The exceptions apply only to law enforcement authorities. Private operators have no available exception and cannot deploy real-time biometric identification in publicly accessible spaces.

The practices with widest commercial exposure

Three of the eight prohibitions are directly relevant to the largest number of private sector operators in 2026: emotion recognition in workplaces and educational institutions (Article 5(1)(f)), biometric categorisation to infer sensitive attributes (Article 5(1)(g)), and subliminal manipulation (Article 5(1)(a)).

Emotion recognition has been a growth area for enterprise AI vendors across HR technology, e-learning, customer service, and productivity tooling. Systems that analyse video from employee meetings to score engagement, detect frustration in customer service calls to route escalations, or track student attention during online learning fall within the Article 5(1)(f) prohibition when deployed in workplaces and educational institutions. Many of these systems are marketed under neutral labels, such as engagement analytics, attention measurement, and workforce sentiment tools, and operators who have purchased them through standard procurement channels may not have been informed of their regulatory status.

The critical question for each such system is whether the deployment context is the workplace or an educational institution. The same technology deployed to measure driver alertness on a commercial vehicle for safety purposes may qualify for the Article 5(1)(f) safety exception. Deployed to measure the emotional engagement of employees during a training session, it does not. The context of use, not the underlying technology, determines the prohibition's application.
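
As an illustration of that context-dependence, the first-pass triage for an emotion recognition system reduces to a short decision function. The sketch below is illustrative only: the context and purpose labels are hypothetical names, not terms defined by the Regulation, and the outputs are prompts for legal review, not legal conclusions.

```python
# Illustrative triage sketch for Article 5(1)(f) exposure. The context and
# purpose labels are hypothetical, and the returned strings are escalation
# prompts for legal review, not legal determinations.

PROHIBITED_CONTEXTS = {"workplace", "educational_institution"}
EXCEPTED_PURPOSES = {"medical", "safety"}  # the narrow Art. 5(1)(f) carve-outs

def emotion_recognition_triage(context: str, purpose: str) -> str:
    """First-pass Article 5(1)(f) triage for an emotion recognition system."""
    if context not in PROHIBITED_CONTEXTS:
        return "outside Art. 5(1)(f): check other AI Act provisions"
    if purpose in EXCEPTED_PURPOSES:
        return "possible exception: document the medical or safety justification"
    return "likely prohibited: escalate for withdrawal review"

# The same underlying technology, different deployments:
print(emotion_recognition_triage("workplace", "safety"))              # possible exception
print(emotion_recognition_triage("workplace", "engagement_scoring"))  # likely prohibited
```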

Biometric categorisation to infer sensitive attributes under Article 5(1)(g) is relevant to advertising technology, content moderation, and security screening systems. Any system that takes a biometric input, including a photograph, a voice recording, or a video frame, and generates an inference about a person's religious affiliation, political orientation, or sexual orientation is prohibited. Given the proliferation of large multimodal models capable of making such inferences as a byproduct of their general functionality, operators who have deployed general-purpose AI in contexts where biometric data is processed should audit whether the model's outputs include such inferences.

Subliminal manipulation under Article 5(1)(a) is less commonly identified in enterprise AI portfolios, but it is relevant to personalisation systems, gaming platforms, and behavioural nudging tools that operate at the boundary of conscious perception. A recommendation system that exploits known psychoacoustic effects or visual temporal thresholds to influence purchase decisions is not a hypothetical; it is a function that several existing commercial platforms have explored or deployed. The significant harm requirement sets a threshold, but operators should not treat the absence of documented harm as confirmation of compliance. The prohibition attaches to systems that are likely to cause significant harm, not only those that have caused it.

Penalties and enforcement

Article 99(3) of Regulation (EU) 2024/1689 establishes the penalty ceiling for Article 5 breaches at EUR 35 million or seven per cent of the total worldwide annual turnover of the undertaking for the preceding financial year, whichever is higher. This is the highest of the three penalty tiers in the Regulation. By comparison, breaches of high-risk system obligations carry a maximum of EUR 15 million or three per cent of global turnover, and incorrect or incomplete information provided to authorities carries EUR 7.5 million or one per cent.
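
The "whichever is higher" mechanic across the three tiers is simple arithmetic. A minimal sketch follows, using the amounts above; the function name and tier keys are illustrative, not terminology from the Regulation.

```python
# Penalty ceilings under Regulation (EU) 2024/1689, Article 99(3)-(5):
# the applicable maximum is the higher of a fixed amount and a share of
# total worldwide annual turnover for the preceding financial year.
# Tier keys and the function name are illustrative labels only.

TIERS_EUR = {
    "article_5_prohibitions": (35_000_000, 0.07),  # Art. 99(3)
    "high_risk_obligations": (15_000_000, 0.03),   # Art. 99(4)
    "incorrect_information": (7_500_000, 0.01),    # Art. 99(5)
}

def penalty_ceiling(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the maximum fine for a tier: the higher of the two ceilings."""
    fixed, share = TIERS_EUR[tier]
    return max(fixed, share * worldwide_turnover_eur)

# Illustrative: an undertaking with EUR 2 billion global turnover faces a
# ceiling of 7% of turnover (EUR 140 million), since that exceeds EUR 35 million.
print(penalty_ceiling("article_5_prohibitions", 2_000_000_000))  # 140000000.0
```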

Enforcement is assigned to national market surveillance authorities under Article 70. Each member state designates one or more authorities to monitor compliance and investigate complaints. The European AI Office, whose role is anchored in Article 64, supervises providers of general-purpose AI models and coordinates cross-border enforcement. For Article 5 breaches that involve a GPAI model, both the national authority and the AI Office may have concurrent investigative interest.

A key structural feature of the penalty provisions is that the breach does not require proof of specific financial harm to an identified victim. The prohibited practice, if established, constitutes the infringement. This is consistent with the legislative characterisation of the Article 5 practices as fundamental violations of EU values rather than commercial harms. An operator who has deployed a prohibited system without causing any documented damage to any specific individual is still in breach of Article 5 and subject to the maximum penalty tier.

As of May 2026, national supervisory authorities are at varying stages of readiness. Several member states have designated their market surveillance authorities and begun establishing AI oversight functions, and the formal enforcement machinery is in place even where investigative capacity is still building. Operators who treat the absence of enforcement action to date as evidence that Article 5 is not yet live are misjudging the regulatory position.

What operators should do now

The starting point is an inventory of AI systems in the operator's portfolio. The Article 5 audit does not require a comprehensive AI Act compliance programme. It requires a focused review of whether any system in scope falls within one of the eight prohibited categories. The review should be conducted system by system, with reference to the specific functions the system performs, the data it processes, and the context in which it is deployed.
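
One way to run that focused review is a flat inventory screened system by system against the eight categories. The sketch below is a hedged illustration: the signal keywords are hypothetical triage heuristics for deciding what to escalate for legal review, not the legal tests themselves.

```python
# Illustrative portfolio screen: flag systems whose functions touch an
# Article 5 category for detailed legal review. The signal sets are
# hypothetical triage heuristics, not the legal tests in the Regulation.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    functions: set[str]        # what the system does
    data_processed: set[str]   # categories of input data
    context: str               # where it is deployed

ARTICLE_5_SIGNALS = {
    "5(1)(e) untargeted scraping": {"facial_image_scraping"},
    "5(1)(f) emotion recognition": {"emotion_inference", "engagement_scoring"},
    "5(1)(g) biometric categorisation": {"sensitive_attribute_inference"},
}

def screen(system: AISystem) -> list[str]:
    """Return the Article 5 categories a system should be reviewed against."""
    return [
        category
        for category, signals in ARTICLE_5_SIGNALS.items()
        if system.functions & signals  # any overlap triggers escalation
    ]

meeting_tool = AISystem(
    name="meeting-analytics",
    functions={"transcription", "engagement_scoring"},
    data_processed={"video", "audio"},
    context="workplace",
)
print(screen(meeting_tool))  # ['5(1)(f) emotion recognition']
```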

For each system that touches emotion data, biometric data, or behavioural influence, the operator should document the deployment context. An emotion recognition system that would be prohibited in a workplace training session may be permissible in a clinical setting under the Article 5(1)(f) safety and medical exception. The documentation of that context forms part of the operator's compliance record under Article 26 of the AI Act, which requires deployers of high-risk AI systems to maintain a compliance file. Even where Article 5 is the relevant provision rather than the high-risk regime, the practice of maintaining a contemporaneous record of the deployment context, the system's functions, and the legal basis on which the operator determined the prohibition did not apply is sound risk management.

Operators who have deployed systems procured from third-party providers should review the contractual representations those providers made about compliance with applicable EU law. A deployer who relies on a provider's assurance that a system is not prohibited under Article 5, where that assurance is incorrect, may still face enforcement exposure. The regulatory obligation runs to the deployer, not only to the provider. Contractual indemnities give the deployer financial recourse once liability is established, but they do not eliminate the deployer's primary obligation.

Where a system is identified as falling within a prohibited category, the operator's options are limited. Article 5 prohibitions are not subject to exemptions by individual operators, national authorities, or even the European Commission. There is no authorisation process for a system that falls within Article 5(1)(a) through (g). The system must be withdrawn from use. For real-time biometric identification under Article 5(1)(h), law enforcement authorities in member states may seek authorisation for the specific exceptions listed in that point, under the conditions in Article 5(2) to (6). That procedure is not available to private operators.

The compliance file that Article 26 requires high-risk system deployers to maintain should be structured to address Article 5 as a threshold matter, before the high-risk regime analysis. Operators who have completed the Article 26 compliance file work described in our operator obligations guide will find that it already provides the framework for documenting the Article 5 review. The two analyses share the same documentary architecture: system identification, function mapping, deployment context, and the legal basis for the operator's compliance conclusion.
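
As an illustration of that shared documentary architecture, a per-system review record might carry the four elements just named. This is a sketch of an assumed internal format; the Regulation prescribes no such schema, and the field names are hypothetical.

```python
# Illustrative record mirroring the documentary architecture described above:
# system identification, function mapping, deployment context, and the legal
# basis for the compliance conclusion. The schema is assumed, not prescribed.

from dataclasses import dataclass
from datetime import date

@dataclass
class Article5ReviewRecord:
    system_id: str              # system identification
    functions: list[str]        # function mapping
    deployment_context: str     # where and how the system is used
    legal_basis: str            # why the operator concluded Art. 5 does not apply
    reviewed_on: date           # keeps the record contemporaneous

record = Article5ReviewRecord(
    system_id="crm-routing-v2",
    functions=["call transcription", "escalation routing"],
    deployment_context="customer service; no workplace emotion inference",
    legal_basis="no Art. 5(1) category engaged on function mapping",
    reviewed_on=date(2026, 5, 1),
)
```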

For the enforcement architecture that will administer these obligations, including the structure of national supervisory authorities and the AI Office's cross-border coordination role, see our briefing on the EU AI Act's enforcement architecture.

For the scoring methodology that European AI providers are using to assess their certification posture against the full stack of EU AI obligations, see Agent Certified's full methodology.

Frequently asked questions

Which Article 5 prohibitions have been in force since February 2025?

Article 5(1)(a) through (h) of Regulation (EU) 2024/1689 have all been in force since 2 February 2025, when the first application date of the AI Act arrived. Together with the rest of Chapters I and II, the prohibitions were the first AI Act provisions to apply, well ahead of the August 2026 general application date. They cover: subliminal or purposefully manipulative techniques causing harm, exploitation of vulnerabilities of specific groups, social scoring, individual criminal risk prediction based on profiling alone, untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplaces and educational institutions (except for medical or safety reasons), biometric categorisation to infer sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions).

Does Article 5 apply to private companies or only to public authorities?

All eight prohibitions can apply to private operators. The social scoring prohibition (Article 5(1)(c)) covers both public and private actors in the final text; the limitation to public authorities in the Commission's 2021 proposal was removed. The real-time biometric identification prohibition (Article 5(1)(h)) is framed around use for law enforcement purposes in publicly accessible spaces. The remaining prohibitions, including subliminal manipulation, exploitation of vulnerabilities, emotion recognition in workplaces and education, individual criminal risk prediction, and biometric categorisation, apply to any provider or deployer operating a covered system in the EU, regardless of whether it is a public body.

What is the enforcement consequence for an Article 5 breach?

Breaches of the Article 5 prohibitions carry the highest penalty tier under the EU AI Act: up to EUR 35 million or seven per cent of total worldwide annual turnover for the preceding financial year, whichever is higher. These penalties are administered by national market surveillance authorities; where a general-purpose AI model is involved, the AI Office may also take a coordinating role. The breach does not require financial damage to a specific person. The prohibited practice itself, if established, is the infringement.

Are there exceptions to the real-time biometric identification prohibition?

Yes. Article 5(1)(h) lists three narrow exceptions for law enforcement use: the targeted search for victims of abduction, trafficking, or sexual exploitation and for missing persons; the prevention of a specific, substantial, and imminent threat to life or physical safety, or of a genuine and present or foreseeable terrorist threat; and the localisation or identification of a suspect of a serious criminal offence listed in Annex II punishable by at least four years' imprisonment. Each use requires prior judicial or independent administrative authorisation, except in duly justified urgent cases, where authorisation must be requested within 24 hours. The exceptions are for law enforcement only. Private operators have no access to them and may not deploy real-time remote biometric identification systems in publicly accessible spaces.

Does the Digital Omnibus proposal affect Article 5?

No. The European Commission's Digital Omnibus proposal, which is in trilogue as of May 2026, proposes to delay the Annex III high-risk obligations from 2 August 2026 to 2 December 2027. It proposes no change to the Article 5 prohibitions, which have been in force since 2 February 2025 and are not subject to the deferral. The prohibitions are the floor of the AI Act, not a deadline that can slip.

References

  1. Regulation (EU) 2024/1689, Article 5(1)(a)-(h), prohibited AI practices.
  2. Regulation (EU) 2024/1689, Article 5(2)-(6), conditions for law enforcement exceptions to the real-time biometric identification prohibition.
  3. Regulation (EU) 2024/1689, Article 99(3), maximum penalties for breaches of Article 5 prohibitions.
  4. Regulation (EU) 2024/1689, Article 113, entry into force and staged application dates, including application of Chapters I and II (with Article 5) from 2 February 2025.
  5. European Commission, Proposal for a Regulation amending Regulation (EU) 2024/1689 as regards the dates of application of certain provisions (Digital Omnibus on AI), COM(2025); Article 5 is not affected.
  6. European Commission, Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689, C(2025) 884 final, February 2025.