Article 27 of the EU AI Act is not a theoretical instrument. It is a mandatory pre-deployment obligation with a hard deadline, a supervisor notification requirement, and a second-tier penalty ceiling of EUR 15 million. This article is the deliverable: a checklist, a template, and a filing guide, not a description of what those things are.

Key takeaways

  • Three categories of deployer must complete a FRIA: bodies governed by public law, private providers of public services, and deployers of Annex III point 5(b) or 5(c) systems (creditworthiness and life or health insurance risk).
  • The FRIA must contain the elements set out in Art. 27(1)(a) through (f): process description, usage period and frequency, affected populations, specific harm risks, human oversight measures, and a risk materialisation plan with internal governance arrangements and a complaint mechanism. This article treats the governance arrangements as a distinct seventh element.
  • The 2 August 2026 deadline is the entry-into-application date for Chapter III deployer provisions. No grandfathering exemption exists for systems already in production.
  • Art. 27(3) requires the deployer to notify the relevant market surveillance authority of FRIA results before first deployment. Exemptions are narrow, confined to public security and life-protection situations under Art. 46(1).
  • An existing GDPR Art. 35 DPIA may complement but cannot replace a FRIA. Art. 27(4) permits the two instruments to be conducted jointly. A DPIA alone does not satisfy the Art. 27 obligation.
  • Non-compliance falls within the second penalty tier under Art. 99(4): up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher.
  • The minimum FRIA file has six core sections mapped to Art. 27(1)(a)-(f), plus governance and DPIA-alignment sections, a cover sheet, and a supervisor notification record.

Section 1: The 90-day countdown as of 23 April 2026

Today is 23 April 2026. The primary compliance deadline under the EU AI Act for deployers subject to Article 27 is 2 August 2026: 101 calendar days away. The recommended filing date of 22 July 2026, which this section's title counts down to, is exactly 90 days away. A FRIA for a non-trivial deployment typically requires three to six weeks of internal work: scoping, system characterisation, population mapping, risk identification, and governance design. Notification to the market surveillance authority must occur before first deployment, not on the same day. For systems already in production, the assessment must be complete before 2 August, not after.

The practical consequence is that deployers who have not started should start this week. A two-week scoping exercise, an eight-week assessment and design process, a two-week internal review, and a one-week notification window leave almost no margin. The calendar below sets the milestones.

Milestone calendar: 23 April to 2 August 2026
  • 23 April 2026 (today): Scoping begins. Determine whether Art. 27 applies.
  • 6 May 2026: Scoping complete. FRIA process initiated or formally documented as not required.
  • 20 May 2026: System characterisation complete. Provider documentation compiled.
  • 3 June 2026: Affected population map finalised.
  • 17 June 2026: Risk identification complete. All harm categories documented.
  • 1 July 2026: Oversight and mitigation design complete. Governance arrangements documented.
  • 15 July 2026: Internal legal and compliance review complete. DPIA alignment verified.
  • 22 July 2026: FRIA finalised. Notification submitted to market surveillance authority.
  • 2 August 2026: Provisions enter application. FRIA must be on file and authority notified.

Thirteen weeks from today to the recommended filing date. Seven workstreams. The sections that follow provide the substance for each.
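
The countdown arithmetic is mechanical and worth embedding in any project plan. Below is a minimal Python sketch using the two anchor dates from this section; the filing target is this publication's recommendation, not a statutory date.

```python
from datetime import date

TODAY = date(2026, 4, 23)
FILING_TARGET = date(2026, 7, 22)          # recommended notification date (editorial, not statutory)
ENTRY_INTO_APPLICATION = date(2026, 8, 2)  # Chapter III deployer provisions apply from this date

# Days remaining to each anchor date as of publication.
print((FILING_TARGET - TODAY).days)           # 90  -> the 90-day countdown to filing
print((ENTRY_INTO_APPLICATION - TODAY).days)  # 101 -> calendar days to entry into application
```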

Section 2: Who must file a FRIA

Article 27(1) does not apply to all deployers of high-risk AI systems. It applies to a defined subset. The trigger is not purely the risk classification of the system. It is the combination of system classification and the type of entity deploying it, or the specific use case. The following decision tree decodes the three trigger categories.

Trigger A: Bodies governed by public law

Any public authority, public body, or entity established under public law that deploys a high-risk AI system listed in Annex III, other than systems intended to be used in the area of critical infrastructure (Annex III, point 2), which Article 27(1) expressly carves out, is subject to Article 27. This captures central government departments, regional administrations, local councils, public universities, public hospitals, public housing authorities, and any body that exercises public functions under a specific statutory mandate. The term "body governed by public law" is the same term used in public procurement law (Directive 2014/24/EU, Art. 2(1)(4)) and carries the same meaning: a body established for the specific purpose of meeting needs in the general interest, not having an industrial or commercial character, and financed or supervised by a public authority.

Trigger B: Private entities providing public services

Private entities that provide services of a public nature, such as private operators contracted to deliver social benefits, healthcare services, housing support, or employment services on behalf of a public body, are included within the Article 27 scope. The relevant consideration is the nature of the service, not the legal form of the entity. A private company contracted to operate a social welfare assessment platform on behalf of a municipal authority is providing a public service. A private employment agency operating a CV screening tool for its own commercial recruitment clients is not.

Trigger C: Deployers of Annex III point 5(b) and 5(c) systems

Regardless of whether the deployer is a public or private entity, any deployer using a system that falls within Annex III, point 5(b) or point 5(c), must complete a FRIA.

Annex III, point 5(b) covers AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud.

Annex III, point 5(c) covers AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance [4].

The following reference table summarises the trigger logic.

Entity type | System classification | FRIA required?
Public body (governed by public law) | Any Annex III system (except point 2, critical infrastructure) | Yes
Private entity providing public services | Any Annex III system used in delivery of public services | Yes
Any deployer (public or private) | Annex III point 5(b): creditworthiness or credit scoring | Yes
Any deployer (public or private) | Annex III point 5(c): life and health insurance risk or pricing | Yes
Private entity not providing public services | Annex III point 4 (employment, worker management) | No (Art. 26 applies but not Art. 27)
Private entity not providing public services | Annex III point 5(a): access to essential private-sector services | No (unless public service element present)
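
For deployers automating their scoping inventory, the trigger logic in the table reduces to a short decision function. The sketch below encodes this article's reading of the three triggers and is an illustration, not an authoritative legal test; the input flags are hypothetical inventory fields.

```python
def fria_required(
    public_law_body: bool,           # Trigger A: body governed by public law
    provides_public_services: bool,  # Trigger B: private entity delivering public services
    annex_iii_point: str,            # e.g. "2", "4", "5(a)", "5(b)", "5(c)"
) -> bool:
    """Apply the Article 27(1) trigger test as summarised in the table above."""
    # Trigger C applies to any deployer, public or private.
    if annex_iii_point in {"5(b)", "5(c)"}:
        return True
    # The Art. 27(1) carve-out for critical infrastructure (point 2), noted under Trigger A.
    if annex_iii_point == "2":
        return False
    # Triggers A and B cover the remaining Annex III systems.
    return public_law_body or provides_public_services

# A private insurer pricing life cover (Annex III point 5(c)): FRIA required.
assert fria_required(False, False, "5(c)") is True
# A private recruiter screening CVs for commercial clients (point 4): Art. 26 only.
assert fria_required(False, False, "4") is False
```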

Section 3: The seven required contents of a FRIA

Article 27(1) specifies that the assessment must include at minimum six categories of information, set out as sub-paragraphs (a) through (f) in the enacted text, with both the internal governance arrangements and the complaint mechanism embedded within (f). Many practitioners and commentators enumerate seven elements by treating the governance arrangements as a distinct item. This article uses the seven-element reading, which is operationally more useful.

Art. 27(1)(a): Description of the deployer's processes

Statutory language: "a description of the deployer's processes in which the high-risk AI system will be used in accordance with the instructions for use."

This element requires the deployer to describe, in concrete terms, how the system fits into the operational workflow. It is not the provider's description of what the system does. It is the deployer's own account of the process context. A credit institution deploying an Annex III point 5(b) scoring model must describe the credit application intake process, the point at which the model receives its inputs, the form in which outputs are presented to human reviewers, and the escalation path for borderline decisions. A general description of the model's architecture does not satisfy this element. A process diagram accompanied by a written description does.

Art. 27(1)(b): Period and frequency of use

Statutory language: "a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used."

This element captures both the temporal scope of the deployment and the volume of decisions the system will influence. A life insurer running an Annex III point 5(c) pricing model must specify whether the model is used for all new applications, for renewals, or for both; the expected annual volume of assessments; and any planned review dates for the deployment. Where the frequency is variable or demand-driven, the deployer should provide a range and confirm the monitoring mechanism.

Art. 27(1)(c): Affected categories of persons and groups

Statutory language: "the categories of natural persons and groups likely to be affected by its use in the Union."

This is the population mapping element. It requires the deployer to identify, with specificity, who the system will affect. A public housing authority using an eligibility assessment system must identify the applicant population by relevant demographic characteristics: age ranges, language groups, disability status, immigration status, and any other factor relevant to the assessment. The element requires the deployer to look beyond direct decision subjects and consider indirect effects, for example family members of an applicant whose application is scored by an automated system.

Art. 27(1)(d): Specific risks of harm

Statutory language: "the specific risks of harm likely to have an impact on the categories of natural persons or groups identified pursuant to point (c), having regard to the information given by the provider pursuant to Article 13."

This is the risk identification element. The cross-reference to Article 13 is significant. Article 13 requires providers to supply deployers with clear, accurate, and relevant information about the system's intended purpose, performance characteristics, limitations, and known biases. The deployer's risk assessment must incorporate that provider-supplied information and go further by assessing how those known system characteristics interact with the deployer's specific context. A credit scoring model may have documented lower accuracy for applicants with thin credit files. If the deployer's population includes a high proportion of such applicants, that documented limitation creates a specific and quantifiable risk of harm that must appear in the FRIA.

Art. 27(1)(e): Human oversight measures

Statutory language: "a description of the implementation of human oversight measures, according to the instructions for use."

This element requires the deployer to describe the oversight it will actually implement, not the oversight the provider recommended in the abstract. The description must be specific: which individuals will exercise oversight, what competence and authority they hold, how they access the system's outputs and logs, what threshold triggers their intervention, and what decisions they can take. An oversight description that reads "a human will review all flagged cases" is insufficient. A description that names the role, sets the review interval, specifies the conditions for override, and provides an escalation path to a named senior decision-maker begins to satisfy the element.

Art. 27(1)(f): Measures if risks materialise

Statutory language: "the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms."

This element combines two related requirements. The first is the contingency plan: what the deployer will do if a risk identified in element (d) actually causes harm. This includes suspension protocols, affected-person notification procedures, and coordination with the provider under Article 26(5) incident reporting requirements. The second is the complaint mechanism: a concrete, accessible channel through which affected persons can raise concerns about the system's outputs. The mechanism must be operational, not hypothetical. A reference to a generic customer service email is not sufficient. A named complaints procedure, with a designated handler and a defined response timeline, is closer to what supervisors will expect.

Element seven: Governance arrangements (embedded in Art. 27(1)(f))

Several national supervisors and commentators treat the governance arrangements embedded in Art. 27(1)(f) as a distinct seventh element, given their operational scope. The governance section of the FRIA should identify the internal body or senior officer responsible for FRIA compliance, the review schedule, and the trigger conditions for a revised or supplementary assessment. Where the deployment involves a consortium of public bodies or a contracted private operator, the governance section must clarify which entity owns the FRIA and bears the notification duty under Art. 27(3).
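
Since a blank or placeholder section will not satisfy the Art. 27(3) notification requirement (see Section 4, weeks 11 and 12), one practical safeguard is to hold the seven elements as a structured record with a completeness check. A minimal sketch; the field names are this article's labels, not statutory terms.

```python
from dataclasses import dataclass, fields

@dataclass
class FriaContents:
    process_description: str = ""       # Art. 27(1)(a)
    period_and_frequency: str = ""      # Art. 27(1)(b)
    affected_populations: str = ""      # Art. 27(1)(c)
    risks_of_harm: str = ""             # Art. 27(1)(d)
    human_oversight: str = ""           # Art. 27(1)(e)
    materialisation_measures: str = ""  # Art. 27(1)(f), including complaint mechanism
    governance_arrangements: str = ""   # embedded in (f); treated here as element seven

    def missing_elements(self) -> list[str]:
        """Names of elements still blank; must be empty before notification."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]
```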

Section 4: The 90-day checklist

The following checklist runs from 23 April 2026 to 22 July 2026, leaving a ten-day buffer before the 2 August 2026 provisions enter application. Each week has three to five concrete actions. The checklist is structured by workstream, not by job title. A deployer with a small compliance function may complete multiple workstreams in parallel.

Weeks 1 and 2: 23 April to 6 May 2026 — Scoping and trigger verification

  1. Compile a complete inventory of all AI systems currently in production or scheduled for deployment before 1 September 2026.
  2. For each system, confirm the Annex III classification. If the provider has not supplied a conformity declaration, request one immediately under Art. 13.
  3. Apply the Article 27 trigger test: public law entity? private public-service provider? Annex III 5(b) or 5(c) system regardless of entity type?
  4. Document the scoping conclusion in writing with a named author and a date. Systems that do not trigger Art. 27 should be recorded as such, with reasoning.
  5. Assign a named FRIA owner for each system that does trigger Art. 27. This person is accountable for all subsequent steps.

Weeks 3 and 4: 7 May to 20 May 2026 — System characterisation

  1. Obtain and read the provider's Art. 13 transparency documentation in full. Extract all statements about intended purpose, operational limits, known limitations, and accuracy characteristics by population group.
  2. Map the deployer's intended use against the provider's stated intended purpose. Identify any divergence. Divergences must be resolved before the FRIA can be completed: either the usage is brought within the intended purpose or the provider must be consulted about an extended use case.
  3. Document the deployment process in workflow form. A simple diagram showing input sources, the decision point where the AI system is engaged, the form of the output, and the human review step is sufficient.
  4. Confirm the planned deployment period, review schedule, and expected decision volume. Record as a written statement.
  5. Check whether an existing GDPR Art. 35 DPIA covers this system. If so, retrieve it. It will be needed for the Art. 27(4) alignment exercise in weeks 11 and 12 (see Section 6 of this article and Section 8 of the template).

Weeks 5 and 6: 21 May to 3 June 2026 — Affected population mapping

  1. Identify every category of natural person who will interact with, or be subject to decisions influenced by, the system.
  2. For each category, note any characteristics that may affect vulnerability to the risks the system poses: age, health status, financial position, language, disability, immigration status, and similar factors relevant to the deployment context.
  3. Extend the mapping to indirect third parties. If a credit scoring system assesses an applicant who is a sole trader, the financial impact of an adverse decision may extend to employed staff and dependants.
  4. Quantify the affected population where possible. An assessment that names "social housing applicants in Region X" and provides an estimated annual volume is materially stronger than one that refers to "affected persons" in the abstract.
  5. Review any existing equality impact assessments or data protection impact assessments for additional population data. These are not substitutes for the FRIA population mapping, but they are a useful input.

Weeks 7 and 8: 4 June to 17 June 2026 — Risk identification

  1. For each population category identified, list the specific risks of harm that the system's use may generate. Draw directly from the provider's Art. 13 documentation and from the system's known performance characteristics.
  2. Assess the likelihood and severity of each risk. A risk that is low probability but very high impact (for example, incorrect denial of social benefit to a household in acute need) requires the same documentation rigour as a higher-probability, lower-impact risk.
  3. Identify any systemic bias risks. Where the system's training data has known limitations, assess whether those limitations are likely to produce systematically worse outcomes for any of the population groups identified in weeks 5 and 6.
  4. Check the system against the EU Agency for Fundamental Rights Handbook on European non-discrimination law and the Charter of Fundamental Rights. Both are public documents and provide a structured rights inventory against which to check the risk register.
  5. Document each risk in a numbered register. Each entry should name the risk, the affected group, the rights dimension engaged, and the evidence base.
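
The numbered register in step 5 benefits from a fixed schema so every entry carries the same fields. A sketch with a worked entry drawn from the thin-credit-file example in Section 3; all field names and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str            # e.g. "R-001"
    description: str        # the specific harm that may occur
    affected_group: str     # population category from the weeks 5-6 mapping
    right_engaged: str      # rights dimension, with Charter reference
    likelihood: str         # e.g. "low" / "medium" / "high"
    severity: str           # documented with equal rigour regardless of likelihood
    evidence: str           # basis in the provider's Art. 13 documentation

example = RiskEntry(
    risk_id="R-001",
    description="Lower scoring accuracy for applicants with thin credit files",
    affected_group="First-time credit applicants",
    right_engaged="Non-discrimination (Charter Art. 21)",
    likelihood="medium",
    severity="high",
    evidence="Provider Art. 13 documentation: accuracy table by population group",
)
```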

Weeks 9 and 10: 18 June to 1 July 2026 — Oversight and mitigation design

  1. Draft the human oversight section. Name the individuals or roles responsible for oversight. State their competence and authority. Describe the threshold conditions that trigger intervention.
  2. Design the contingency plan for each risk in the risk register. For each risk, the plan must address: how the deployer will detect that the risk has materialised, who will take the first action, what that action is, how affected persons will be informed, and how the incident will be reported to the provider and, if necessary, the market surveillance authority under Art. 26(5).
  3. Design the complaint mechanism. Name a designated handler, set a response deadline, and decide whether the mechanism is open to all affected persons or only to decision subjects. Note: if the system affects persons who may have difficulty accessing a written complaints procedure (for example, applicants with low literacy or limited digital access), the mechanism must make provision for those cases.
  4. Draft the governance section. Name the internal body or senior officer accountable for ongoing FRIA compliance. Set a review schedule. Define the trigger conditions for a supplementary assessment.
  5. Verify that the oversight design complies with Art. 14 requirements. The system must have been built to enable the oversight you are planning to staff. If the provider's design does not allow meaningful human override, raise this with the provider now, before the deadline.

Weeks 11 and 12: 2 July to 15 July 2026 — Review and DPIA alignment

  1. Circulate the draft FRIA to internal legal and compliance reviewers. Allow at least five working days for review.
  2. Conduct the Art. 27(4) alignment exercise: compare the draft FRIA against any existing GDPR Art. 35 DPIA. Identify overlapping elements, flag gaps in the DPIA that the FRIA must fill, and consolidate the documents where possible without losing required elements from either.
  3. Consider whether external consultation is appropriate. Article 27 does not mandate consultation with affected groups or independent experts, but several supervisors have stated informally that they view such consultation as indicative of a thorough assessment. For public bodies, existing public sector equality and participation obligations may impose a consultation duty independently of the AI Act.
  4. Finalise all sections. A FRIA with a blank or placeholder section is not a completed assessment and will not satisfy the Art. 27(3) notification requirement.
  5. Identify the relevant market surveillance authority in your jurisdiction. Confirm the notification procedure: some authorities have already published notification forms, while others have indicated that a submission of the completed assessment by email is sufficient pending formal template publication by the AI Office.

Week 13: 16 July to 22 July 2026 — Notification and filing

  1. Submit the FRIA notification to the relevant market surveillance authority under Art. 27(3). Follow the authority's published procedure. Retain a timestamped copy of the submission and any acknowledgement.
  2. File the completed FRIA alongside the Art. 26 operator compliance file. The FRIA is the first document a supervisor is likely to request during an enforcement inquiry.
  3. Update the FRIA register to reflect the submission date, the authority notified, and the planned review date.
  4. Brief senior decision-makers on the FRIA contents, the oversight obligations, and the complaint mechanism. The persons named in the oversight section must understand their responsibilities before deployment begins.
  5. Confirm readiness for 2 August 2026. The system may be lawfully deployed from that date, provided the FRIA is complete and the notification has been submitted.

Section 5: The FRIA template structure

The AI Office has been mandated under Article 27(5) to develop a questionnaire template, including through an automated tool, to assist deployers. As of April 2026, no finalised AI Office template has been published. The structure below is an editorial template proposed by this publication. It maps to Art. 27(1)(a)-(f) and incorporates the governance element separately. It is not legal advice. Deployers should adapt it to their specific context and seek qualified legal counsel before filing.

Editorial template: Fundamental Rights Impact Assessment under Article 27, Regulation (EU) 2024/1689

Cover sheet
Name and address of deployer | Legal form | Contact person and title | Date of assessment | Version number | System identifier (as per provider documentation) | Annex III classification | Art. 27 trigger category (A, B, or C) | Named FRIA owner

Section 1 (Art. 27(1)(a)): Deployment process description
Overview of the organisational process in which the system is used. Process diagram or narrative description. Point at which the system is engaged. Form and format of system output. Human review step and escalation path. Reference to provider's instructions for use.

Section 2 (Art. 27(1)(b)): Deployment period and frequency
Start date of deployment. Planned review or renewal date. Expected volume of decisions influenced per month or year. Frequency of system invocation. Any planned changes to scope or volume.

Section 3 (Art. 27(1)(c)): Affected populations
List of all categories of natural persons directly affected. List of all categories indirectly affected. Relevant characteristics of each category. Estimated population size. Basis for the population assessment.

Section 4 (Art. 27(1)(d)): Risk register
Numbered risk entries, each including: risk identifier, description, affected population category, fundamental right engaged (with Charter reference), likelihood assessment, severity assessment, basis in provider documentation (Art. 13 reference), and any additional deployer-specific evidence.

Section 5 (Art. 27(1)(e)): Human oversight measures
Named role(s) responsible for oversight. Competence and training requirements. Authority to intervene or override. Oversight trigger thresholds. Logging and audit trail. Escalation path to senior decision-maker.

Section 6 (Art. 27(1)(f)): Risk materialisation plan and complaint mechanism
Contingency plan for each risk in the Section 4 register. Suspension protocol. Affected-person notification procedure. Incident reporting chain under Art. 26(5). Complaint mechanism description. Designated complaints handler. Response timeline.

Section 7: Governance arrangements
Internal body or senior officer accountable for FRIA compliance. Review schedule. Conditions triggering a supplementary assessment. Record of any prior assessments relied upon under Art. 27(2). Relationship to existing GDPR Art. 35 DPIA (see Section 8 below).

Section 8: DPIA alignment record (Art. 27(4))
Reference to existing DPIA, if any. Mapping of overlapping elements. Gaps in the DPIA not addressed by the FRIA. Gaps in the FRIA not addressed by the DPIA. Decision on integration or separate maintenance.

Annex A: Supervisor notification record
Authority notified. Notification date. Notification method. Reference number or acknowledgement. Contact at the authority.

Annex B: Provider documentation index
List of all provider documentation relied upon, with version numbers and dates.

Section 6: DPIA and FRIA interoperability

Article 27(4) provides that "the fundamental rights impact assessment referred to in this Article shall complement, where applicable, the data protection impact assessment referred to in Article 35 of Regulation (EU) 2016/679 and in Article 27 of Directive (EU) 2016/680." The operational meaning is precise: a FRIA does not replace a DPIA, but the two may be conducted jointly and documented in an integrated instrument, provided all elements required by both are present.

The practical logic for conducting them together is strong. Both instruments require a description of the processing or deployment context. Both require identification of affected persons. Both require a risk register. Both require a mitigation plan. The difference is in the scope of rights covered. A DPIA is confined to risks arising from personal data processing and their impact on data protection rights under Regulation (EU) 2016/679. A FRIA is concerned with the full spectrum of fundamental rights guaranteed by the EU Charter: non-discrimination, human dignity, freedom of expression, access to justice, and others, for all persons affected, not only those whose personal data is processed.

The following mapping table shows where the two instruments overlap and where the FRIA requires additional work.

FRIA element (Art. 27(1)) | Covered by DPIA (Art. 35 GDPR)? | Gap to fill in FRIA
(a) Deployment process description | Partially. DPIA covers data flows, not the full operational process. | Add non-data process steps, decision workflow, and human review mechanism.
(b) Period and frequency of use | Partially. DPIA typically covers processing duration and volume. | Align FRIA and DPIA statements. Add decision volume if not in DPIA.
(c) Affected categories of persons | Partially. DPIA covers data subjects. FRIA is broader. | Add persons affected by system outputs who are not personal data subjects.
(d) Specific risks of harm | No. DPIA is limited to data protection risks. | Add all non-data-protection fundamental rights risks: non-discrimination, access to services, dignity, and others.
(e) Human oversight measures | No. Not a DPIA requirement. | Full Art. 27(1)(e) content required. No DPIA equivalent.
(f) Risk materialisation plan and complaint mechanism | Partially. DPIA requires mitigation measures for data protection risks and a DPA consultation threshold. | Add non-data mitigation measures. Add Art. 27 complaint mechanism. Align incident reporting with Art. 26(5).
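
The table reduces to a simple gap analysis: record the DPIA's coverage per element, and everything short of full coverage remains FRIA-side work. A sketch encoding this article's reading of the mapping; the coverage labels are editorial, not regulatory categories.

```python
# DPIA coverage of each Art. 27(1) element, per the mapping table above.
# "partial" coverage still leaves FRIA-specific content to produce.
DPIA_COVERAGE = {
    "(a) process description": "partial",
    "(b) period and frequency": "partial",
    "(c) affected persons": "partial",
    "(d) risks of harm": "none",
    "(e) human oversight": "none",
    "(f) materialisation plan and complaints": "partial",
}

# Only "full" coverage would remove an element from the FRIA workload,
# and no element in the table reaches that level.
fria_workload = [el for el, cover in DPIA_COVERAGE.items() if cover != "full"]
print(fria_workload)  # all six elements require FRIA-specific content
```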

The EDPB has announced that it is working on guidelines on the interplay between the GDPR and the AI Act. Those guidelines had not been published as of April 2026. When they are published, deployers with integrated FRIA and DPIA documents should review them for any additional alignment requirements [13].

Section 7: The supervisor notification under Article 27(3)

Article 27(3) creates a positive obligation to notify. The text provides that "deployers shall notify the supervisory authority of market surveillance, designated in accordance with Article 70, of the results of the assessment carried out pursuant to paragraph 1 of this Article, prior to putting into service the high-risk AI systems referred to in paragraph 1, unless an exemption pursuant to Article 46(1) applies."

Three aspects require specific attention.

What must be notified

The obligation is to notify the results of the FRIA, not to submit the entire document. In practice, most supervisors are likely to request the full document when reviewing notifications, and several have indicated informally that they expect the completed assessment to accompany the notification in any case. Prudent deployers should treat the notification as a submission of the full FRIA, structured in accordance with the template in Section 5 above.

To which authority

The designated authority varies by Member State and by the sector in which the system operates. Article 70 of the Regulation requires each Member State to designate one or more national competent authorities. Several Member States have designated their national data protection authority as the market surveillance authority for certain categories of high-risk AI system, given the existing expertise and enforcement infrastructure those bodies possess.

France has designated the Commission Nationale de l'Informatique et des Libertés (CNIL) as the competent authority for AI systems that involve personal data processing, with sector-specific authorities covering other cases. Germany has designated the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit (BfDI) for comparable categories. The Netherlands' Autoriteit Persoonsgegevens (AP) has published an AI and algorithmic regulation report and has indicated that it will treat the FRIA as a primary supervisory document. Ireland's Data Protection Commission (DPC) holds jurisdiction for AI systems involving personal data processing operating under Irish law. For insurance-related systems under Annex III point 5(c), national financial services supervisors may share or hold primary jurisdiction depending on the Member State's designation framework [14].

Deployers operating across multiple Member States should identify the relevant authority in each jurisdiction where the system is deployed, since the AI Act does not create a single-point-of-contact notification mechanism for multi-jurisdictional deployments.

Exemptions under Article 46(1)

Article 46(1) provides a narrow exemption from the Art. 27(3) notification obligation where disclosure of the FRIA results would jeopardise public security or life protection. This exemption is not a general derogation. It is designed for deployments in sensitive operational contexts, such as law enforcement or border control, where notification of a specific risk assessment to a supervisory register could create security exposure. Commercial deployers in the financial and insurance sectors should not expect to rely on Art. 46(1).

Section 8: Penalties under Article 99

Regulation (EU) 2024/1689 creates a three-tier penalty architecture. The structure is set out in Article 99. Deployers subject to Article 27 are exposed to the second tier.

The second tier: Article 99(4)

Article 99(4) provides that non-compliance with obligations applicable to deployers of high-risk AI systems under the Regulation, where not governed by paragraph 3 (the highest tier, which applies to prohibited practices under Article 5), shall be subject to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up to 3 per cent of its total worldwide annual turnover for the preceding financial year, whichever is higher.
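
Because the ceiling is the higher of two limbs, the 3 per cent limb governs once worldwide turnover exceeds EUR 500 million. A one-line calculation with an illustrative turnover figure:

```python
def art_99_4_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Second-tier maximum fine: EUR 15 million or 3% of turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 billion turnover: the 3% limb governs.
print(art_99_4_ceiling(2_000_000_000))  # 60000000.0, i.e. EUR 60 million
```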

A failure to conduct a FRIA before deployment, a failure to notify the market surveillance authority under Art. 27(3), or a material deficiency in the FRIA contents that renders the assessment formally incomplete each constitutes a breach of the Art. 27 obligation and falls within Article 99(4).

How supervisors are expected to apply penalties

Article 99(1) requires supervisors to take into account a list of factors including: the nature, gravity, and duration of the infringement; the intentional or negligent character; actions taken to mitigate damage; the degree of responsibility; the technical and organisational measures implemented; the manner in which the authority became aware; whether other penalties have already been applied for the same facts; and cooperation with the investigation. These factors map onto a structured sanctioning analysis that experienced practitioners will recognise from GDPR enforcement.

In the early enforcement period, supervisors are likely to prioritise cases where the absence of a FRIA correlates with a documented instance of harm to affected persons. A deployer that failed to complete a FRIA, was made aware of a complaint by an affected person, and still did not act presents a very different enforcement profile from a deployer that completed a substantially good-faith assessment with minor procedural deficiencies.

SME mitigation under Article 99(6)

Article 99(6) provides that when deciding on the amount of the administrative fine, the competent authority shall have due regard to the interests of small and medium-sized enterprises, including start-ups, and their economic viability. This is a mitigating instruction to the supervisor, not an exemption for SMEs. An SME that deliberately avoids completing a FRIA cannot rely on Art. 99(6) as a shield. An SME that made a genuine compliance effort and produced an assessment with identifiable procedural gaps stands in a materially better position than one that produced nothing.

Section 9: Common failure modes

Early compliance reviews across European deployers preparing for the August 2026 deadline have identified five recurring failure modes.

Failure mode 1: Treating the FRIA as a one-off certification

Article 27(2) makes clear that a FRIA produced for one deployment context may be relied upon for a substantially similar subsequent deployment, but must be updated if material circumstances change. Deployers that complete the FRIA as a filing exercise and then never review it will find that a change in the affected population, the system's operational parameters, or the provider's technical documentation has made their assessment materially inaccurate. National supervisors have stated that they will treat an outdated FRIA as equivalent to an absent one for enforcement purposes.

Failure mode 2: Confusing the FRIA with the DPIA

Deployers with mature data protection functions sometimes conflate the two instruments and assume that a completed GDPR Art. 35 DPIA satisfies the FRIA obligation. It does not. The DPIA is a necessary but not sufficient condition for FRIA compliance where personal data is processed. The non-data-protection elements of the FRIA, particularly the full fundamental rights risk register under Art. 27(1)(d), cannot be derived from a DPIA. They require independent assessment against the Charter of Fundamental Rights.

Failure mode 3: Skipping the mitigation plan

Several early-draft FRIAs reviewed by practitioners contain thorough risk identification sections and then stop. A risk register without a corresponding mitigation plan does not satisfy Art. 27(1)(f). The element specifically requires "measures to be taken in the case of the materialisation of those risks." For each risk identified, there must be a named response. The absence of a mitigation plan is also a red flag to supervisors: it suggests the deployer conducted the risk identification as a compliance exercise without intending to act on the findings.

Failure mode 4: Missing a functioning complaint mechanism

Article 27(1)(f) requires the FRIA to describe "the arrangements for internal governance and complaint mechanisms." Deployers that refer to a generic customer service function without designing a specific AI complaint pathway are not compliant. The complaint mechanism must be accessible to the persons affected by the system's outputs, which may include persons with limited digital literacy, limited language skills, or disabilities. A mechanism that is technically available but practically inaccessible to a significant portion of the affected population is unlikely to satisfy a supervisory review.

Failure mode 5: Assuming the provider carries the FRIA duty

Some deployers have taken the position that the provider's conformity assessment under Article 43, or the provider's self-certification where applicable, covers the FRIA obligation. It does not. The provider's conformity assessment demonstrates that the system meets the technical requirements applicable to high-risk AI systems. The FRIA is a deployer-specific document that assesses the impact of that system in the deployer's specific operational context, with the deployer's specific user population. No provider document can substitute for it. A deployer that relies on provider documentation as its FRIA has produced no FRIA.

Section 10: Frequently asked questions

Who must complete a FRIA under the EU AI Act?

Article 27(1) of Regulation (EU) 2024/1689 requires three categories of deployer to complete a Fundamental Rights Impact Assessment before first deployment. The first category is bodies governed by public law. The second is private entities providing public services such as education, healthcare, housing, and social benefits. The third is any deployer using a high-risk AI system listed in Annex III point 5(b) (creditworthiness assessment or credit scoring) or Annex III point 5(c) (risk assessment and pricing in life and health insurance). Other deployers of high-risk systems under Article 6(2) are not subject to Article 27 unless they fall into one of these three categories.

Is the FRIA the same as a DPIA?

No. A Data Protection Impact Assessment under GDPR Article 35 is confined to risks arising from personal data processing. A Fundamental Rights Impact Assessment under AI Act Article 27 covers the full spectrum of rights guaranteed by the EU Charter of Fundamental Rights, including non-discrimination, human dignity, access to justice, and freedom of expression, for all persons affected by the system regardless of whether their personal data is processed. The two instruments can be conducted concurrently. Article 27(4) explicitly permits a FRIA to complement an existing DPIA rather than replace it, but the DPIA alone does not satisfy the FRIA obligation.

What is the deadline for the first FRIA?

The FRIA obligation under Article 27(1) applies from 2 August 2026, the date on which the Chapter III deployer provisions of Regulation (EU) 2024/1689 enter into application. For systems already in operation on that date, the assessment must be completed before the provisions activate. There is no grandfathering exemption for existing deployments. The AI Office has been mandated under Article 27(5) to provide a questionnaire template to assist deployers, but the obligation to assess exists independently of any template the AI Office publishes.

Who receives the FRIA notification under Article 27(3)?

Article 27(3) requires the deployer to notify the relevant market surveillance authority of the results of the FRIA. The authority varies by Member State and by the sector in which the system operates. Several Member States have designated their national data protection authority as the market surveillance authority for certain high-risk categories. France has designated the CNIL, Germany the BfDI, the Netherlands the Autoriteit Persoonsgegevens, and Ireland the Data Protection Commission. For financial and insurance applications, sectoral supervisors may share jurisdiction. Deployers should confirm the designated authority in their jurisdiction before filing.

What are the penalties for a missed FRIA?

A failure to conduct or notify a FRIA as required by Article 27 constitutes a breach of Chapter III of Regulation (EU) 2024/1689 and falls within the second penalty tier under Article 99(4). The maximum administrative fine is EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. Article 99(6) requires supervisors to consider the economic viability of SMEs and startups when calibrating penalties, but this is a mitigating factor, not an exemption. Supervisors may impose interim measures, require suspension of deployment, or order remediation as part of enforcement.

Can a FRIA be delegated to the provider?

No. The obligation in Article 27(1) attaches to the deployer, not the provider. A deployer may draw on information and documentation provided by the provider under Article 13 (transparency requirements) and use the provider's technical documentation to populate sections of the FRIA. However, the assessment itself must reflect the deployer's own processes, the specific population the deployer serves, and the deployer's own mitigation plan. A provider's pre-completed risk assessment or conformity documentation does not satisfy the Article 27 obligation. The deployer signs the FRIA and bears the regulatory duty.

Does a FRIA need to be made public?

The AI Act does not require deployers to publish a FRIA. The obligation is to conduct the assessment and to notify the relevant market surveillance authority of its results under Article 27(3). However, the FRIA must be made available to the supervisor on request under the general access-to-documents provisions. Public bodies subject to freedom-of-information obligations in their Member State may face separate disclosure requirements under national law. Some supervisors have indicated informally that they expect proactive disclosure from public sector deployers. Deployers should seek local legal advice on whether domestic transparency norms apply.

When must a FRIA be updated?

Article 27(2) provides that the assessment obligation applies to the first use of the system; for subsequent use in substantially similar circumstances, the deployer may rely on the earlier assessment. Where circumstances change materially, a new or revised assessment is required. Material changes include changes to the population served, the intended purpose of the system, the data inputs used, the human oversight arrangements, or the regulatory environment. Several national supervisors have indicated they regard the FRIA as a living document subject to periodic review rather than a one-time compliance checkpoint.

Does the FRIA apply to generative AI agents?

The FRIA obligation under Article 27 is triggered by classification as a high-risk system under Article 6(2) and Annex III, not by the underlying model architecture. A generative AI agent that is used to make or substantially influence decisions on creditworthiness, insurance pricing, access to education, employment selection, or essential public services falls within the Annex III categories and is subject to the Article 27 obligation if deployed by a qualifying entity. A general-purpose large language model used solely for internal drafting that does not influence decisions on natural persons does not trigger the FRIA.

Is a FRIA required before or after deployment?

Before. Article 27(1) uses the words "before putting such systems into service." The assessment is a precondition for deployment, not a post-deployment audit obligation. The notification to the market surveillance authority under Article 27(3) must also be made before first use. Deployers that begin using a high-risk system on or after 2 August 2026 without a completed FRIA are in breach from day one of deployment. For systems already in production before 2 August 2026, the assessment must be completed and the authority notified before that date.

Section 11: Related reading

For the doctrinal foundation that informs this checklist, see EU AI Act Article 27: The Fundamental Rights Impact Assessment every deployer must file, published today on this desk. For the broader operator obligations regime of which Article 27 forms a part, see EU AI Act operator obligations: a 2026 compliance guide. For the human oversight design requirement that the FRIA must document under Art. 27(1)(e), see EU AI Act Article 14: the human oversight design requirement explained. For the enforcement architecture that will receive the Art. 27(3) notification, see EU AI Act enforcement: the AI Office and national supervisors explained.

For the broader AI agent insurance and certification landscape as the EU AI Act deadline approaches, see agentliability.co for the transatlantic operator liability perspective, agentcertified.eu for agent certification frameworks, and agentinsured.eu for coverage structures that map to the FRIA risk categories documented here.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024. The Regulation entered into force on 1 August 2024.
  2. Article 27(1), Regulation (EU) 2024/1689. The obligation applies before deployment and covers the six elements set out in sub-paragraphs (a) through (f), together with the governance and complaint mechanism requirements embedded in (f).
  3. Article 27(2), Regulation (EU) 2024/1689. Permits reliance on earlier assessments in substantially similar deployment contexts, with mandatory update where circumstances change.
  4. Annex III, point 5(b) and 5(c), Regulation (EU) 2024/1689. Point 5(b) covers AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with exception for fraud detection systems. Point 5(c) covers AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
  5. Article 27(3), Regulation (EU) 2024/1689. Notification of FRIA results to the market surveillance authority designated under Article 70, prior to first deployment. Exemption under Article 46(1) applies in public security or life-protection contexts.
  6. Article 27(4), Regulation (EU) 2024/1689. The FRIA shall complement, where applicable, the data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.
  7. Article 27(5), Regulation (EU) 2024/1689. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under Article 27 in a simplified manner.
  8. Article 99(4), Regulation (EU) 2024/1689. Second-tier penalty ceiling: EUR 15 000 000 or 3 per cent of total worldwide annual turnover, whichever is higher, for non-compliance with obligations applicable to deployers of high-risk AI systems.
  9. Article 99(6), Regulation (EU) 2024/1689. Supervisors shall have due regard to the interests of SMEs, including start-ups, and their economic viability when setting penalty amounts.
  10. Article 13, Regulation (EU) 2024/1689. Transparency and provision of information to deployers: providers must ensure that high-risk AI systems are accompanied by instructions for use including relevant information on the system's intended purpose, performance characteristics, accuracy metrics, and known limitations.
  11. Article 6(2) and Annex III, Regulation (EU) 2024/1689. Classification of high-risk AI systems by use case, covering biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
  12. Article 35, Regulation (EU) 2016/679 (General Data Protection Regulation). Data Protection Impact Assessment obligation for processing operations likely to result in a high risk to the rights and freedoms of natural persons.
  13. European Data Protection Board, Letter to the AI Office on the role of data protection authorities in the AI Act framework, 2024. The EDPB confirmed it is working on guidelines on the interplay between the GDPR and the AI Act. As of April 2026, final guidelines had not been published.
  14. Commission Nationale de l'Informatique et des Libertés (CNIL), France: national designation as market surveillance authority for AI systems involving personal data. Bundesbeauftragte für den Datenschutz und die Informationsfreiheit (BfDI), Germany: comparable designation. Autoriteit Persoonsgegevens (AP), Netherlands: AI and Algorithmic Regulation report, 2024, confirming supervisory intent to treat the FRIA as a primary enforcement document. Data Protection Commission (DPC), Ireland: designated authority for AI systems involving personal data processing under Irish jurisdiction.
  15. European Insurance and Occupational Pensions Authority (EIOPA), Opinion on Artificial Intelligence Governance and Risk Management, 2024. Sets out supervisory expectations for insurers using AI in underwriting, pricing, and claims, with specific attention to the Annex III point 5(c) use case category. EIOPA has indicated that the FRIA for insurance pricing AI should reflect actuarial fairness standards alongside the fundamental rights framework.
  16. EU Agency for Fundamental Rights, Handbook on European non-discrimination law, 2018 edition. Provides the rights inventory against which FRIA risk identification under Art. 27(1)(d) should be conducted. Available at fra.europa.eu.
  17. Article 46(1), Regulation (EU) 2024/1689. Narrow exemption from certain notification obligations where disclosure would jeopardise public security or life protection. Not applicable to commercial deployers in financial or insurance sectors.
  18. Article 70, Regulation (EU) 2024/1689. Designation of national competent authorities as market surveillance authorities and single points of contact. Member States must notify the Commission of their designated authorities.