Article 50 sits in Chapter IV of the AI Act under the heading "Transparency obligations for providers and deployers of certain AI systems." It is not limited to high-risk systems. Its reach is broader, its duties are concrete, and its deadline is the same as the rest of the operator regime: 2 August 2026. Deployers who have been focused solely on Article 26 compliance may find Article 50 is the provision they are least prepared for.

Key takeaways

  • Article 50(1) requires providers to design AI systems that interact with humans so that users know they are talking to an AI. Deployers carry a co-obligation not to suppress that disclosure at deployment time. The exception covers law enforcement systems authorised by law with appropriate safeguards.
  • Article 50(2) requires providers of generative AI systems to mark synthetic audio, image, video, and text outputs in a machine-readable format detectable as artificially generated. The AI Office's draft Code of Practice endorses a multi-layered approach: C2PA Content Credentials plus invisible watermarking plus visible user-facing disclosure.
  • Article 50(3) requires deployers of emotion recognition and biometric categorisation systems to inform every natural person exposed to the system of its operation. The obligation is separate from, and in addition to, GDPR consent requirements for biometric data processing.
  • Article 50(4) requires deployers of deepfake systems to disclose that content is artificially generated, and requires deployers generating AI text on matters of public interest to label that text as AI-produced. Artistic and satirical works carry a limited disclosure duty rather than the full labelling obligation.
  • Article 50(5) sets a delivery standard: all disclosures under Article 50 must reach the person at the latest at the first interaction or exposure, in a clear and distinguishable manner, conforming to accessibility requirements.
  • Codes of practice drawn up under Article 50(7) create, once approved, a rebuttable presumption of compliance. The AI Office is coordinating a labelling code with a final text expected before June 2026. Adherence is voluntary but materially reduces enforcement risk.
  • Penalties for Article 50 breaches fall under the second tier of Article 99, up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. National market surveillance authorities lead enforcement; the AI Office coordinates across Member States.

What Article 50 actually says, paragraph by paragraph

Article 50 appears in Chapter IV, which sits outside the high-risk classification system of Chapter III. This placement matters: the transparency obligations in Article 50 apply on the basis of what a system does in its interaction with people, not on the basis of whether it is classified as high-risk. A general-purpose chatbot that is not listed in Annex III may still be squarely within Article 50.

Article 50(1) addresses the most basic disclosure: users interacting with AI must know they are doing so. The obligation falls on providers to design and develop AI systems intended for direct interaction with natural persons in such a way that the persons concerned are informed that they are interacting with an AI system. The standard is objective. The test is whether a reasonably well-informed, observant, and circumspect person would be aware of the AI nature of the interaction. Where that is obvious from context, the disclosure obligation does not arise. Where it is not obvious, the obligation is unconditional.

Article 50(2) addresses the content layer rather than the interaction layer. Providers of AI systems that generate synthetic audio, image, video, or text content must ensure outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. The marking obligation is subject to technical feasibility and the state of the art. The standard is not a binary pass or fail but an obligation to implement what is technically achievable, calibrated against what leading practitioners in the field are doing. In 2026, with C2PA Content Credentials widely available and invisible watermarking deployed at scale by the major model providers, the state of the art is sufficiently advanced that failure to implement any technical marking will be difficult to defend.

Article 50(3) switches from providers to deployers. Deployers that use emotion recognition systems or biometric categorisation systems must inform the natural persons exposed to the system of its operation. This obligation applies regardless of whether the system is high-risk. An employer using an AI tool that analyses employee facial expressions during video calls is a deployer subject to Article 50(3) irrespective of how the tool is classified under Article 6.

Article 50(4) contains two sub-obligations. The first applies to deployers of AI systems that generate or manipulate image, audio, or video content constituting a deep fake: they must disclose that the content has been artificially generated or manipulated. The second applies to deployers of AI systems generating text published to inform the public on matters of public interest: the text must be labelled as AI-produced. For this second sub-obligation, an exception exists where the AI-generated content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.

Article 50(5) sets the timing and format rule for all disclosures under paragraphs 1 through 4. Information must be provided at the latest at the time of the first interaction or exposure, in a clear and distinguishable manner, and in conformity with applicable accessibility requirements. The "clear and distinguishable" standard excludes disclosures buried in terms of service, displayed in small print, or shown after the interaction has already begun.

Article 50(6) is a non-override clause. The obligations in Article 50 apply without prejudice to the requirements of Chapter III and to other Union or national law on transparency. They add to the existing transparency framework rather than replacing any part of it.

The four disclosure duties in practice

First duty: notifying users of AI interaction (Article 50(1))

The paradigm case is a customer service chatbot. If the chatbot presents itself as a human, or if its AI nature is not obvious from context, the deployer must ensure a disclosure is presented before or at the start of the conversation. The disclosure must be affirmative: a general notice in the footer of a website that the organisation uses AI tools does not satisfy Article 50(1) as applied to a specific conversational interface.
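For a conversational interface, the timing rule can be enforced in code. The following sketch, in Python with hypothetical names and message text, shows one way a deployer-side wrapper can guarantee the disclosure is emitted before any model reply; Article 50 does not prescribe any particular implementation.

```python
# Illustrative sketch only: Article 50(1) does not mandate any particular
# implementation. Class, method names, and message text are hypothetical.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Ask to be transferred if you would like to speak to a person."
)

class ChatSession:
    def __init__(self):
        self.transcript: list[tuple[str, str]] = []  # (role, text)
        self._disclosed = False

    def _ensure_disclosure(self) -> None:
        # Article 50(5): the disclosure must reach the user at the latest
        # at the first interaction, so it is emitted before any reply.
        if not self._disclosed:
            self.transcript.append(("system", AI_DISCLOSURE))
            self._disclosed = True

    def reply(self, user_message: str) -> str:
        self._ensure_disclosure()
        self.transcript.append(("user", user_message))
        answer = self._generate(user_message)  # call out to the model
        self.transcript.append(("assistant", answer))
        return answer

    def _generate(self, prompt: str) -> str:
        return f"(model output for: {prompt})"  # placeholder for the model call

session = ChatSession()
session.reply("What are your opening hours?")
assert session.transcript[0] == ("system", AI_DISCLOSURE)  # disclosure came first
```

The design point is that the disclosure is enforced in the session logic itself, not left to the page template, so a configuration change cannot silently suppress it.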

The obligation runs first to the provider. A provider who builds a chatbot must engineer the disclosure into the system. But it then also runs to the deployer. A deployer who customises a white-labelled chatbot product and removes the AI disclosure, or who configures it to mimic a human persona without disclosure, is independently in breach of Article 50(1) even if the provider's underlying system was compliant.

Voice assistants, automated telephone systems, and AI agents that send emails or messages on behalf of a person are all within the scope of Article 50(1). The obligation is not limited to real-time text interfaces. Any AI system designed to create the impression of a direct interaction with a human falls within the provision.

Second duty: machine-readable marking of generated content (Article 50(2))

Article 50(2) is technically the most demanding provision. It requires that the machine-readable marking be effective, interoperable, robust, and reliable as far as technically feasible. Each qualifier has practical content. Effective means the marking enables detection. Interoperable means the marking can be read by detection tools from multiple providers, not only by proprietary systems. Robust means the marking survives common post-processing operations such as format conversion, compression, and mild editing. Reliable means it produces consistent results across different detection contexts.

The AI Office's draft Code of Practice, first published on 17 December 2025 with a second draft following in March 2026, identifies three technical layers that together satisfy the Article 50(2) standard. The first layer is secured metadata embedding, using the C2PA Content Credentials specification. C2PA creates a cryptographically signed provenance record attached to the file, containing information about its origin and any AI processing applied. The second layer is invisible watermarking, which embeds a signal in the content itself, separate from the metadata, providing resilience where metadata is stripped. The third layer is visible user-facing disclosure, which serves the human-readable side of the requirement.

The Code makes clear that no single layer is sufficient on its own. A visible label without a machine-readable signal does not meet Article 50(2), because the machine-readable requirement is independent of what is visible to the user. A C2PA metadata tag without any watermark may fail the robustness criterion where metadata is routinely stripped in downstream processing. Providers who supply generative AI outputs to deployers should document which technical measures are in place and confirm that the combination meets the multi-layer standard.
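A rough orchestration sketch of the three layers follows, with stand-in functions in place of a real C2PA SDK, a watermarking library, and the publication front end; only the structure, not the function names or signatures, reflects the draft code.

```python
# Sketch of the three-layer flow endorsed by the draft code. The helper
# functions are hypothetical stand-ins; real implementations replace them.

def embed_content_credentials(data: bytes, generator: str) -> bytes:
    """Layer 1 (stand-in): attach a signed provenance manifest to the file."""
    return data  # a real implementation would append a C2PA manifest

def embed_watermark(data: bytes, payload: str) -> bytes:
    """Layer 2 (stand-in): embed an imperceptible signal in the content."""
    return data  # a real implementation modifies pixels, samples, or tokens

def visible_label(medium: str) -> str:
    """Layer 3: the user-facing disclosure shown alongside the content."""
    return f"This {medium} was generated with AI."

def mark_for_publication(data: bytes, generator: str, medium: str) -> tuple[bytes, str]:
    # No single layer suffices on its own: metadata can be stripped,
    # watermarks carry little detail, and a visible label alone is not
    # machine-readable. The draft code expects the combination.
    data = embed_content_credentials(data, generator)
    data = embed_watermark(data, payload=generator)
    return data, visible_label(medium)

marked, label = mark_for_publication(b"...", "example-model-v1", "image")
```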

Third duty: emotion recognition and biometric categorisation notification (Article 50(3))

Emotion recognition systems are AI systems that infer or predict the emotional states of natural persons from biometric data, including facial expressions, voice tone, gait, or other physiological signals. Biometric categorisation systems are those that assign natural persons to specific categories on the basis of their biometric data; where the inferred categories are sensitive attributes such as ethnicity, political opinion, religious belief, or sexual orientation, the practice falls within the Article 5 prohibitions discussed below.

Both categories of system are restricted under Article 5 in specific contexts. Where their use is not prohibited, deployers face the Article 50(3) notification duty. The duty is to inform the natural persons exposed to the system of its operation. This means disclosing that such a system is running, its general purpose, and the categories of data processed: at a minimum, enough for a reasonable person to understand what is being observed or inferred.

Article 50(3) applies at the point of exposure, not only at a point of contractual agreement. An employer cannot satisfy the obligation solely by including a paragraph in an employment contract signed at onboarding if the system is later installed in a meeting room without additional notification. The obligation to inform attaches to the deployment context as well as to any pre-contractual relationship.

The deployer must also comply with relevant data protection law. Where the emotion recognition or biometric categorisation system processes biometric data for the purpose of uniquely identifying natural persons, Article 9(1) of the GDPR prohibits the processing unless one of the Article 9(2) conditions applies. The notification required by Article 50(3) is necessary but not sufficient for GDPR compliance. Both regimes apply in parallel.

Fourth duty: deepfake disclosure and AI-generated public-interest text labelling (Article 50(4))

Article 50(4) covers two distinct scenarios that share a common rationale: preventing AI-generated content from misleading its audience about its origins.

The deepfake disclosure requirement applies to deployers who use AI to generate or manipulate image, audio, or video content that appreciably resembles existing persons, places, objects, or events, and that would falsely appear to a person to be authentic or truthful. The disclosure that content has been artificially generated or manipulated must be communicated to the audience in a manner appropriate to the medium and the context of publication.

The public-interest text labelling requirement is separate. It targets deployers who generate AI text for publication on matters of public interest. These include news articles, regulatory commentary, political analysis, financial market reports, and public health information. The obligation is to label the text as AI-produced. Where the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication, the obligation does not apply.

The editorial control exception is narrower than it might appear. A publisher who uses an AI system to draft articles and then applies light copy-editing does not qualify. The editorial responsibility exception requires genuine editorial accountability, the kind that would exist in a regulated media context, not mere review for formatting errors.

Who carries each duty: provider versus deployer

Article 50 distributes obligations across the supply chain in a way that differs from the Article 26 structure. The following mapping sets out the primary duty-bearer for each paragraph.

Article 50(1): Provider designs disclosure into the system. Deployer must not remove or suppress it. Both can be held responsible if the disclosure is absent at the point of user interaction.

Article 50(2): Provider is solely responsible for technical marking. The machine-readable signal must be embedded in the output at the point of generation. A deployer who receives a generated output and publishes it has no independent technical obligation under Article 50(2), but the provider's failure to mark the output may create a gap in the deployer's own Article 50(4) compliance where that output constitutes a deepfake or public-interest text.

Article 50(3): Deployer is solely responsible. The obligation is to inform persons exposed in the deployer's own operating environment.

Article 50(4): Deployer is solely responsible for disclosure in both the deepfake and the public-interest text scenarios. A deployer cannot discharge this obligation by pointing to a technical marking applied by the provider. The visible, user-facing disclosure is the deployer's duty.

The provider and deployer are not insulated from each other's failures. Where a provider has not implemented technical marking under Article 50(2), the deployer may be in a worse enforcement position on Article 50(4) because it cannot point to a provenance record to corroborate its own disclosure. Supply chain agreements for AI-generated content should address Article 50 compliance allocations explicitly.

How to label in practice: C2PA Content Credentials, watermarking, and visible disclosure UX

The C2PA (Coalition for Content Provenance and Authenticity) specification creates a standard for embedding cryptographically signed provenance information into media files. A C2PA Content Credential is a manifest attached to the file containing a record of its origin, any AI generation or manipulation steps applied, and the identity of the software or model used. The manifest is signed with a certificate that can be verified by any C2PA-compatible detection tool. Where the file is later altered, the manifest signature breaks, flagging the modification.
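The tamper-evidence mechanism can be illustrated with a toy signed manifest. The sketch below is not the C2PA format, which uses a standardised manifest structure and X.509 certificate chains; it only shows why any edit to the content or the manifest breaks verification. It assumes the cryptography package and a hypothetical model name.

```python
# Toy illustration of the signed-manifest concept behind Content Credentials.
# NOT the real C2PA format. Requires: pip install cryptography
import json, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()

content = b"<bytes of an AI-generated image>"
manifest = json.dumps({
    "content_sha256": hashlib.sha256(content).hexdigest(),
    "generator": "example-model-v1",          # hypothetical model identifier
    "actions": ["created: text-to-image"],
}).encode()
signature = signing_key.sign(manifest)        # provenance record is signed

def verify(content: bytes, manifest: bytes, signature: bytes) -> bool:
    public_key = signing_key.public_key()     # in C2PA, carried in a certificate
    try:
        public_key.verify(signature, manifest)   # was the manifest tampered with?
    except InvalidSignature:
        return False
    recorded = json.loads(manifest)["content_sha256"]
    return recorded == hashlib.sha256(content).hexdigest()  # does content match?

assert verify(content, manifest, signature)
assert not verify(content + b"edited", manifest, signature)  # edit is detected
```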

Invisible watermarking operates at the pixel, frequency, or token level depending on whether the content is image, audio, or text. An imperceptible signal is embedded in the content itself. Unlike metadata, the watermark survives common format conversions and compression. It is detectable by dedicated scanning tools even where the C2PA manifest has been stripped. The two techniques are complementary: C2PA provides detailed, verifiable provenance; watermarking provides signal resilience.
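A toy least-significant-bit watermark makes the idea of a signal carried in the content itself concrete. Production watermarks, including those referenced in the draft code, operate in frequency or feature space and are engineered to survive compression; plain LSB embedding is not robust and is shown only to illustrate the concept.

```python
# Toy LSB watermark for a grayscale image: the payload lives in the pixels,
# not in metadata, so stripping the file's metadata does not remove it.
# Plain LSB does NOT survive compression; real schemes are far more robust.
import numpy as np

def embed_bits(image: np.ndarray, bits: list[int]) -> np.ndarray:
    flat = image.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit      # overwrite lowest bit of pixel i
    return flat.reshape(image.shape)

def read_bits(image: np.ndarray, n: int) -> list[int]:
    return [int(p & 1) for p in image.flatten()[:n]]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]            # e.g. an encoded model identifier
marked = embed_bits(img, payload)

assert read_bits(marked, len(payload)) == payload            # signal recoverable
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1  # imperceptible
```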

For visible disclosure, the AI Office's draft code identifies three user-facing patterns: an inline label presented at the point of publication ("This article was generated with AI assistance"), an icon or badge in the interface near the content, and a click-through or hover disclosure linking to further detail. The choice of pattern depends on the medium. For conversational interfaces, the disclosure must appear before or at the start of the conversation. For published images or videos, the label must be displayed with the content, not only in accompanying text. For audio content, a verbal disclosure at the start of the audio meets the requirement in contexts where visual labelling is not feasible.

The Article 50(5) requirement that disclosure arrive at the latest at the first interaction or exposure sets a hard timing rule. Disclosures scheduled to appear after an initial user engagement, or presented only in a settings menu, do not satisfy the "at the latest" standard.

The exceptions

Article 50 contains three categories of exception, each with a different scope.

The law enforcement exception appears in paragraphs 1, 2, 3, and 4. It covers AI systems used to detect, prevent, or investigate criminal offences where such use is authorised by law and subject to appropriate safeguards for the rights and freedoms of third parties. The exception is narrow in two respects. First, it requires a legal basis: an organisation cannot invoke it on the basis of internal policy. Second, it requires appropriate safeguards: the exception is not a blanket release from Article 50 for any system used in a security context, only for those operating within a properly constituted legal and oversight framework.

The assistive editing exception in Article 50(2) covers systems functioning as assistive editing tools, for example autocorrect, grammar checking, or audio enhancement, as well as cases where an AI system does not substantially alter content. A spell-checker does not need to produce a machine-readable provenance record for every document it processes. A system that generates a substantially new image from a text prompt does.

The artistic and satirical exception in Article 50(4) limits the disclosure obligation for deepfake content that forms part of an evidently artistic, creative, satirical, fictional, or analogous work. In these contexts, the obligation is reduced to disclosing the existence of artificially generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. A clearly labelled satirical film may carry a production notice rather than a frame-by-frame disclosure. The word "evidently" is load-bearing: the artistic or satirical nature of the work must be apparent to its audience, not merely asserted by the deployer.

Codes of practice and the AI Office role

Article 50(7) of Regulation (EU) 2024/1689 directs the AI Office to encourage and facilitate the drawing up of codes of practice at Union level to support the effective implementation of the obligations on detection and labelling of artificially generated or manipulated content, which the Commission may approve by implementing act. The Code of Practice on marking and labelling of AI-generated content is the primary instrument through which the Commission and the AI Office are translating the Article 50(2) obligation into operational technical guidance.

The drafting process began in November 2025. The first draft was published on 17 December 2025 by independent experts drawing on input from several hundred participants including technology providers, media organisations, civil society, and academic researchers. A second draft was published in March 2026 following a structured feedback round. The final code is expected before June 2026.

Adherence to an approved code of practice creates a rebuttable presumption of conformity with the corresponding obligations. For deployers, this means that aligning practices with the finalised labelling code, once published, will carry a compliance presumption that materially reduces enforcement risk. The code is voluntary, but in an enforcement context a deployer who has not engaged with it and has implemented no technical marking at all will face a much more difficult position than one who has adopted the code in good faith.

The AI Office, established under Article 64, has responsibility for coordinating the Article 50 code process alongside the other codes and guidelines related to general-purpose AI (GPAI). The Commission has also indicated that it will publish separate guidelines clarifying the scope of the Article 50 obligations and addressing technical aspects not resolved in the code, with those guidelines expected in the second quarter of 2026.

Penalties and enforcement

Article 99 of Regulation (EU) 2024/1689 sets the penalty architecture in three tiers. Breaches of Article 50 fall within the second tier: up to EUR 15 million or 3 per cent of worldwide annual turnover in the preceding financial year, whichever is higher. The same ceiling applies to breaches of the Article 26 operator obligations. There is no separate, lower tier for transparency failures, which reflects the legislative judgment that failing to inform persons of AI interactions is as serious a breach as misusing a high-risk system.

Article 99(1) requires Member States to take into account the interests of SMEs and start-ups, including their economic viability, when laying down their penalty rules, and Article 99(6) caps fines for SMEs at the lower of the applicable percentage or fixed amount. Neither provision creates an exemption. A small organisation deploying emotion recognition technology without disclosure is still in breach of Article 50(3). The provisions affect the calibration of the fine, not the existence of the breach.

National market surveillance authorities designated under Article 74 lead enforcement. In most Member States, the authority designated for the AI Act's general provisions will be a new or repurposed national digital regulator. In sectors where the existing sectoral regulator has been designated, for example data protection authorities for biometric and personal-data-intensive use cases, the sectoral authority leads enforcement and is expected to treat Article 50(3) breaches as part of a broader GDPR-adjacent investigation rather than as isolated incidents.

The AI Office plays a coordination role across Member States and takes a lead on enforcement matters involving GPAI model providers. Where a provider's failure to embed technical marking under Article 50(2) cascades into multiple deployers' non-compliance, the AI Office has the tools under Chapter X to initiate an investigation that addresses the supply chain failure at source.

The minimum transparency file for Article 50 compliance

The documentation that a deployer should hold on file to support an Article 50 compliance position is distinct from, but related to, the minimum operator file for Article 26. The following elements represent the minimum set that a national supervisor or internal auditor would expect to find.

First, a system inventory identifying every AI system deployed by the organisation that falls within the scope of Article 50(1), (3), or (4). This includes all chatbots and virtual agents, all emotion recognition or biometric categorisation tools, and all generative AI tools used to produce content for publication. The inventory should record the system name, the provider, the version, and the purpose; a minimal record structure for such an inventory is sketched after this list.

Second, for each Article 50(1) system, evidence that the AI disclosure is implemented: a screenshot or flow diagram showing where and how the disclosure appears, and confirmation that it has not been suppressed in the deployer's configuration.

Third, for each generative AI system that produces content for publication, the provider's confirmation of the technical marking measures in place under Article 50(2), including the C2PA implementation status and watermarking method, and the deployer's own visible disclosure procedure for published outputs.

Fourth, for each Article 50(3) system, the notification procedure: how persons exposed to emotion recognition or biometric categorisation are informed, the timing and format of the notification, and the record that the notification has been given.

Fifth, a deepfake and public-interest text disclosure procedure where applicable, including the template label or disclosure statement, the publication workflow that ensures the label is attached before publication, and the editorial control assessment for the public-interest text exception where claimed.

Sixth, a reference to the provider's Article 50(2) compliance documentation or to the deployer's assessment of which adopted code of practice applies and on what basis adherence is maintained.
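Kept as structured records, the file above reduces to a small schema. The following sketch shows one illustrative shape for an inventory entry; the field names are assumptions for the example, not a regulatory schema.

```python
# A minimal sketch of one entry in the Article 50 transparency file,
# assuming the organisation keeps the inventory as structured records.
# Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Article50Entry:
    system_name: str
    provider: str
    version: str
    purpose: str
    paragraphs: list[str]                 # e.g. ["50(1)"], ["50(3)"], ["50(4)"]
    disclosure_evidence: str = ""         # screenshot or flow-diagram reference
    marking_confirmation: str = ""        # provider's Article 50(2) statement
    notification_procedure: str = ""      # Article 50(3) notice: timing, format
    label_template: str = ""              # Article 50(4) deepfake or text label
    editorial_exception: bool = False     # public-interest text exception claimed

inventory = [
    Article50Entry(
        system_name="Support chatbot",
        provider="ExampleVendor",          # hypothetical
        version="2.4",
        purpose="Customer service triage",
        paragraphs=["50(1)"],
        disclosure_evidence="screenshots/chatbot-disclosure-2026-05.png",
    ),
]
```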

The relationship between Article 50 and Article 13

Article 13 requires providers of high-risk systems to supply instructions for use that are sufficiently transparent for deployers to interpret and act on outputs correctly. Article 50 requires the same chain to extend further, to the natural persons who interact with or are exposed to AI systems, through direct disclosure. The two articles address different audiences. Article 13 transparency runs provider to deployer. Article 50 transparency runs deployer to person. A deployment that satisfies Article 13 but ignores Article 50 is not compliant.

Frequently asked questions

What does Article 50 of the EU AI Act require?

Article 50 of Regulation (EU) 2024/1689 imposes four categories of transparency duty. Paragraph 1 requires providers of AI systems that interact directly with natural persons to design those systems so that users are informed they are engaging with AI. Paragraph 2 requires providers of generative AI systems to mark outputs in a machine-readable format so they can be detected as artificially generated. Paragraph 3 requires deployers of emotion recognition and biometric categorisation systems to inform exposed persons of the system's operation. Paragraph 4 requires deployers of deepfake systems, and deployers generating AI text on matters of public interest, to disclose that the content is AI-generated.

When does Article 50 apply?

Article 50 transparency obligations are part of the provisions that apply from 2 August 2026 under Regulation (EU) 2024/1689. This is the same date on which the high-risk AI system operator regime under Chapter III and the general transparency obligations for AI systems enter full application.

Who bears the obligation under Article 50(1) to tell users they are talking to an AI?

Article 50(1) places the primary obligation on the provider, who must design and develop AI systems intended for direct human interaction so that the disclosure is built in. The deployer carries a co-obligation: a deployer who removes or suppresses the disclosure at deployment time is in breach of Article 50(1). If the provider has already ensured the disclosure at the system level, the deployer must not disable it.

What is a machine-readable marking under Article 50(2)?

Article 50(2) requires that synthetic audio, image, video, and text outputs be marked in a machine-readable format and be detectable as artificially generated or manipulated, considering technical feasibility and the state of the art. The Commission's draft Code of Practice on marking and labelling of AI-generated content, published in December 2025 with a second draft in March 2026, endorses a multi-layered approach combining secured metadata embedding (using standards such as C2PA Content Credentials), invisible watermarking for resilience where metadata is stripped, and visible user-facing disclosure. No single technique is considered sufficient on its own.

Does Article 50(3) require emotion recognition systems to obtain consent?

Article 50(3) does not create a consent obligation on its own. It requires deployers of emotion recognition and biometric categorisation systems to inform the natural persons exposed to the system of its operation. However, processing biometric data for emotion recognition or categorisation purposes typically also engages Article 9 of the GDPR, which does require explicit consent or another Article 9(2) exemption. The two obligations run in parallel: the Article 50(3) disclosure duty under the AI Act is in addition to, not a substitute for, the GDPR obligations.

What counts as a deepfake for the purposes of Article 50(4)?

Article 50(4) applies to deployers of AI systems that generate or manipulate image, audio, or video content that appreciably resembles existing persons, places, objects, or events and would falsely appear to a person to be authentic or truthful. The disclosure requirement is triggered by the nature of the output, not by the deployer's intent to deceive. Where the content forms part of an evidently artistic, creative, satirical, or fictional work, the obligation is limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

Does Article 50 apply to AI-generated text about current events or regulatory analysis?

Article 50(4) requires deployers of AI systems that generate text published to inform the public on matters of public interest to disclose that the text was AI-generated. Matters of public interest include news, political commentary, regulatory analysis, and financial market commentary. The obligation does not apply where the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for the publication.

Are there exemptions from Article 50 for law enforcement?

Yes. Article 50(1), (3), and (4) all contain an exemption for AI systems authorised by law to detect, prevent, or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties and in accordance with Union law. Article 50(2) contains a similar exemption for AI systems used in lawful criminal detection contexts. The exemptions are narrow: they require both legal authorisation and appropriate safeguards. A private organisation using an AI system for internal fraud detection is not covered by the law enforcement exemption.

What penalties apply for breaching Article 50?

Breaches of Article 50 fall within the second tier of Article 99 of Regulation (EU) 2024/1689, which sets a fine ceiling of EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. National market surveillance authorities designated under Article 74 are responsible for enforcement of Article 50 obligations in their jurisdictions. The AI Office plays a coordinating role, particularly where GPAI model providers are involved in the supply chain.

What is the Code of Practice on marking and labelling of AI-generated content?

The Code of Practice on marking and labelling of AI-generated content is a voluntary industry instrument being developed under Article 50(7) of the AI Act, facilitated by the European Commission and the AI Office. The first draft was published on 17 December 2025. A second draft was published in March 2026. A final code is expected before June 2026, ahead of the 2 August 2026 application date. The code endorses a multi-layered technical approach combining C2PA Content Credentials, invisible watermarking, and visible user-facing disclosures. Adherence to a finalised code creates a rebuttable presumption of compliance with the corresponding Article 50 obligations.

Related reading

For a full account of the operator obligations that apply in parallel to Article 50, see the EU AI Act operator obligations compliance guide. For the Article 13 transparency obligation that runs from provider to deployer in high-risk AI supply chains, see EU AI Act Article 13: transparency for high-risk AI. For the enforcement architecture that will apply these obligations from August, see EU AI Act enforcement: the AI Office and national supervisors explained. For the fundamental rights impact assessment obligation on deployers of certain high-risk systems, see EU AI Act Article 27: the fundamental rights impact assessment every deployer must file. For the liability analysis that determines who is accountable when a disclosure fails, see AI liability chains: how EU law splits responsibility between provider and deployer. To generate your own FRIA document, see the FRIA Generator tool.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
  2. Article 50(1), Regulation (EU) 2024/1689: providers must ensure AI systems interacting with natural persons are designed so that users are informed of the AI nature of the interaction.
  3. Article 50(2), Regulation (EU) 2024/1689: providers of generative AI systems must ensure outputs are marked in a machine-readable format and detectable as artificially generated, considering technical feasibility and state of the art.
  4. Article 50(3), Regulation (EU) 2024/1689: deployers of emotion recognition and biometric categorisation systems must inform exposed persons of the system's operation and comply with relevant data protection law.
  5. Article 50(4), Regulation (EU) 2024/1689: deployers of deepfake systems must disclose artificial generation; deployers of AI systems generating public-interest text must label that text as AI-produced, subject to the editorial control exception.
  6. Article 50(5), Regulation (EU) 2024/1689: all Article 50 disclosures must be provided at the latest at the time of first interaction or exposure, in a clear and distinguishable manner conforming to accessibility requirements.
  7. Article 50(6), Regulation (EU) 2024/1689: Article 50 applies without prejudice to Chapter III and other Union or national transparency obligations.
  8. Article 50(7), Regulation (EU) 2024/1689: mandate for the AI Office to encourage and facilitate codes of practice at Union level on the detection and labelling of artificially generated or manipulated content, which the Commission may approve by implementing act.
  9. Article 64, Regulation (EU) 2024/1689: establishment and mandate of the AI Office.
  10. Article 74, Regulation (EU) 2024/1689: designation of national competent authorities and market surveillance authorities for AI Act enforcement.
  11. Article 99, Regulation (EU) 2024/1689: penalty tiers, including second-tier fines of up to EUR 15 million or 3 per cent of worldwide annual turnover for breaches of Article 50 and other operator obligations.
  12. Article 99(1) and (6), Regulation (EU) 2024/1689: requirement to take into account the interests and economic viability of SMEs and start-ups when laying down penalty rules, and cap on SME fines at the lower of the applicable percentage or fixed amount.
  13. European Commission, Draft Code of Practice on Marking and Labelling of AI-Generated Content (First Draft), published 17 December 2025, AI Office, Brussels.
  14. European Commission, Draft Code of Practice on Marking and Labelling of AI-Generated Content (Second Draft), published March 2026, AI Office, Brussels.
  15. Coalition for Content Provenance and Authenticity (C2PA), C2PA Technical Specification, version 2.x, available at contentauthenticity.org. The C2PA standard defines the structure and signing requirements for Content Credentials as referenced in the EU AI Office draft labelling code.
  16. Regulation (EU) 2016/679 (General Data Protection Regulation), Article 9, processing of special categories of personal data including biometric data, relevant to the operation of Article 50(3) of the AI Act in the emotion recognition and biometric categorisation context.
  17. Article 13, Regulation (EU) 2024/1689: provider obligation to ensure high-risk AI systems are transparent and accompanied by instructions for use meeting the Article 13(3) minimum content standard.