Article 27 of the EU AI Act imposes a pre-deployment assessment obligation on public bodies, providers of public services, and certain private deployers using high-risk systems for creditworthiness or insurance risk scoring. The assessment covers seven defined elements, must be filed before first deployment, and is the first document national market surveillance authorities will request in an enforcement inquiry.
Key takeaways
- The FRIA obligation applies to public bodies using any Annex III system, to private operators providing public services, and to any deployer using systems listed in Annex III points 5(b) or 5(c) (creditworthiness assessment and insurance risk scoring by non-regulated entities).
- The FRIA must be completed before first deployment and updated whenever a material change is made to the system's purpose, scope, or operational context. The obligation applies from 2 August 2026 with no grandfathering for existing deployments.
- Article 27(1) sets out seven required contents: process description and conditions of use, period and frequency of use, categories of persons affected, specific risks to fundamental rights, human oversight measures, mitigation plan, and complaint and redress mechanism.
- Where a DPIA under Article 35 of Regulation (EU) 2016/679 is also required, Article 27(4) permits a single joint document, provided that all elements required by each instrument are present.
- The FRIA is a supervisor-facing document. Article 27(3) requires deployers to notify the relevant market surveillance authority upon completion, and the document must be made available on request. Several national data protection authorities have confirmed it will be the first item requested in an inquiry.
What Article 27 is
Article 27 of Regulation (EU) 2024/1689 is titled "Fundamental rights impact assessment for certain high-risk AI systems." It sits within Chapter III, Section 3, which deals with the obligations of deployers. Its purpose is to require certain categories of deployer to conduct a structured assessment of the impact their use of a high-risk AI system may have on the fundamental rights of the persons the system will affect, before that system is deployed.
The assessment is not a risk register in the general sense. It is a written instrument addressing a specific and narrower question: what are the concrete effects on fundamental rights, and what has the deployer done about them? The text draws on the tradition of impact assessment instruments already established in Union law, most directly the Data Protection Impact Assessment under Article 35 of Regulation (EU) 2016/679, but extends beyond data protection to the full scope of fundamental rights recognised in the Charter of Fundamental Rights of the European Union.
The FRIA is not a conformity assessment. That obligation sits with the provider under Article 43. The FRIA is a deployer's own assessment of the specific context in which the system will be used and the particular risks that context creates. A provider may deliver a system that has passed conformity assessment without deficiency, and the deployer is still required to complete its own FRIA before deployment if it falls within the categories set out in Article 27(1).
Who must complete a FRIA
Article 27(1) defines the obligation by reference to both the category of deployer and the category of system. The structure is as follows.
| Deployer category | System category | Statutory basis |
|---|---|---|
| Public bodies and Union institutions, agencies, or offices | Any high-risk AI system listed in Annex III | Article 27(1), first limb |
| Private operators providing public services (utilities, public transport, essential services) | Any high-risk AI system listed in Annex III | Article 27(1), first limb |
| Any deployer, public or private | Systems listed in Annex III point 5(b): creditworthiness assessment by entities not subject to financial regulation | Article 27(1), second limb |
| Any deployer, public or private | Systems listed in Annex III point 5(c): risk assessment and pricing for life insurance and health insurance | Article 27(1), second limb |
The practical consequence is that the FRIA obligation is considerably broader for public bodies than for private deployers. A local authority using an AI system to process social benefit applications, allocate housing, or evaluate planning submissions falls within the first limb for each of those deployments. A private company using a creditworthiness scoring model that sits outside the regulated financial sector falls within the second limb even though it is a private operator.
The term "providers of public services" in the first limb is not defined with precision in the text of Regulation (EU) 2024/1689. Recital 48 indicates that it is intended to capture operators that exercise public functions or that hold a position of structural power relative to the persons affected, even where those operators are formally private. National market surveillance authorities are expected to take a broad reading of this category, consistent with the general approach taken to public service obligations in Union sector-specific law.
The seven required contents of a FRIA
Article 27(1)(a) through (g) sets out an exhaustive list of what the FRIA must address. The seven elements are as follows.
- Process description and conditions of use (Article 27(1)(a)). A description of the processes in which the high-risk AI system will be used in line with its intended purpose, and the purpose, conditions, and parameters of deployment. This is not a reproduction of the provider's instructions for use. It is the deployer's own account of the specific operational context: which decisions the system will inform, which processes it is embedded in, and what parameters the deployer has set.
- Period and frequency of use (Article 27(1)(b)). A description of the time period in which the system is to be used and, where relevant, the frequency with which it will be applied to individual persons. Where a system makes continuous determinations (for example, an automated credit decision engine processing applications in real time), the frequency element requires particular attention.
- Categories of persons and groups affected (Article 27(1)(c)). A description of the categories of natural persons and groups of persons likely to be affected by the use of the system in the specific deployment context. This is not a generic description of the population at large. It is a specific account of who will be subject to the system's outputs in this particular deployment, including any subgroups that may be disproportionately affected by reason of age, disability, ethnic or racial origin, gender, socioeconomic status, or other characteristics relevant to the system's function.
- Specific risks to fundamental rights (Article 27(1)(d)). The specific risks of harm that may have an impact on the fundamental rights of the persons and groups identified under point (c). The Charter of Fundamental Rights of the European Union is the reference instrument. Rights commonly engaged by high-risk AI systems include human dignity (Article 1), the protection of personal data (Article 8), non-discrimination (Article 21), and the right to an effective remedy and to a fair trial (Article 47), which is engaged wherever an automated decision significantly affects a person's legal position.
- Human oversight measures (Article 27(1)(e)). A description of the implementation of human oversight measures, pursuant to Article 26(2). This element connects the FRIA directly to the broader deployer obligations. The deployer must describe not only that human oversight is in place, but what it consists of: which persons are responsible, what competence they hold, what authority they have to override the system, and at what threshold they are expected to intervene.
- Mitigation plan (Article 27(1)(f)). The measures taken and envisaged to address the risks identified, including information about the arrangements put in place to allow individuals to file a complaint or seek a remedy. If the deployer has identified a risk but has not yet fully addressed it, the mitigation plan must describe the steps in progress and the timeline for completion.
- Complaint and redress mechanism (Article 27(1)(g)). A description of the mechanisms by which affected persons can seek a remedy or submit a complaint. This requirement is distinct from the general complaint handling obligations that may arise under sectoral law. It is a specific description of the channel through which a person who believes their rights have been affected by the system's output can raise that concern, and the process the deployer will follow in response.
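The seven elements above function as a completeness checklist: a FRIA missing any one of them does not satisfy Article 27(1). A minimal sketch of such a check follows; the field names are illustrative assumptions, not drawn from the Regulation or from any official template.

```python
# Minimal completeness check for the seven FRIA elements of Article 27(1).
# Field names are hypothetical; only the statutory references are from the text.

REQUIRED_ELEMENTS = {
    "process_description":      "Article 27(1)(a)",
    "period_and_frequency":     "Article 27(1)(b)",
    "affected_categories":      "Article 27(1)(c)",
    "fundamental_rights_risks": "Article 27(1)(d)",
    "human_oversight":          "Article 27(1)(e)",
    "mitigation_plan":          "Article 27(1)(f)",
    "complaint_mechanism":      "Article 27(1)(g)",
}

def missing_elements(fria: dict) -> list[str]:
    """Return the statutory references for any element that is absent or empty."""
    return [
        ref for field, ref in REQUIRED_ELEMENTS.items()
        if not fria.get(field)  # absent, None, or empty string all fail
    ]

draft = {
    "process_description": "Benefit application triage workflow",
    "human_oversight": "Caseworker review before any refusal is issued",
}
print(missing_elements(draft))  # five references, (b)(c)(d)(f)(g)
```

A check of this kind only verifies presence, not adequacy: a section that is filled in with generic text could still fail the specificity standard described above.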
When a FRIA must be completed and updated
Article 27(1) specifies that the FRIA must be conducted before the deployer puts the high-risk AI system into service. The obligation is pre-deployment. It is not satisfied by completing the document after deployment begins or by treating the assessment as a post-hoc review.
The obligation to update the FRIA follows from Article 27(2), which requires the deployer to take the necessary steps to update the information where any element of the assessment has changed or is no longer up to date, read alongside Article 26(5), which imposes a continuing duty to monitor the operation of the system. In practice, the FRIA must be revised when the deployer makes a material change to the purpose, scope, frequency, or population affected by the system. A change in the nature of the decisions the system informs, an extension of the system to a new category of persons, a change in the source data feeding the system, or a significant change in the operational parameters all constitute material changes that trigger a review obligation.
The timeline is firm. High-risk AI systems under Annex III fall within the deployer obligations that enter application on 2 August 2026. There is no grandfathering provision. A system already in service on that date must have a FRIA in place on that date. Deployers with existing systems should treat the deadline as fixed.
The DPIA and FRIA relationship
Many deployers subject to the Article 27 FRIA obligation will also be subject to the data protection impact assessment obligation under Article 35 of Regulation (EU) 2016/679. The two instruments address related but distinct questions. A DPIA asks whether the processing presents a high risk to the rights and freedoms of natural persons in the data protection sense. A FRIA asks, more broadly, what the specific risks to all relevant fundamental rights are in the context of the particular deployment.
Article 27(4) provides that where a DPIA carried out under Article 35 of the GDPR is required, the FRIA under Article 27 may be conducted jointly with that assessment. The resulting document may combine both, provided that all elements required under each instrument are addressed. The practical consequence is that deployers who already maintain a DPIA programme can extend it to incorporate the FRIA elements, rather than building a parallel document from scratch.
| Dimension | DPIA (GDPR Article 35) | FRIA (AI Act Article 27) |
|---|---|---|
| Legal basis | Regulation (EU) 2016/679 | Regulation (EU) 2024/1689 |
| Trigger | Processing likely to result in high risk to data subjects | Deployer category and Annex III system type |
| Scope of rights addressed | Data protection rights | All fundamental rights (Charter of Fundamental Rights) |
| Human oversight element | Not required as a discrete element | Required under Article 27(1)(e) |
| Complaint mechanism | Implied by data subject rights under Chapter III GDPR | Explicit requirement under Article 27(1)(g) |
| Joint document permitted | Yes, under Article 27(4) AI Act | Yes, under Article 27(4) AI Act |
Deployers choosing to produce a joint document should ensure that the elements specific to each instrument are clearly identified within the document. A supervisor conducting an inquiry under the AI Act will check for the Article 27 elements, and a data protection authority conducting a GDPR inquiry will check for the Article 35 elements. A document that satisfies one instrument but lacks elements required by the other will be treated as non-compliant with the instrument whose elements are absent.
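The two-checklist logic described above can be sketched as a simple set comparison. The section labels below are hypothetical shorthand for the Article 27(1) elements and the Article 35(7) GDPR DPIA elements; they are assumptions for illustration, not names from either instrument.

```python
# Each supervisor checks a joint document against its own instrument's
# element list only; a gap under either list means non-compliance with
# that instrument. Section labels are illustrative assumptions.

AI_ACT_ELEMENTS = {
    "process", "period_frequency", "affected_categories",
    "rights_risks", "human_oversight", "mitigation", "complaint",
}
GDPR_DPIA_ELEMENTS = {
    "processing_description", "necessity_proportionality",
    "data_subject_risks", "safeguards",
}

def gaps(document_sections: set[str]) -> dict[str, set[str]]:
    """Elements missing per instrument; a non-empty set means the joint
    document fails that instrument."""
    return {
        "AI Act Article 27": AI_ACT_ELEMENTS - document_sections,
        "GDPR Article 35": GDPR_DPIA_ELEMENTS - document_sections,
    }

# A document drafted only as a DPIA passes GDPR but fails the AI Act check.
print(gaps(GDPR_DPIA_ELEMENTS)["AI Act Article 27"] == AI_ACT_ELEMENTS)
```

The design point is that the union of the two lists, not either list alone, defines a compliant joint document.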
The notification duty under Article 27(3)
Article 27(3) requires deployers to notify the competent national market surveillance authority upon completion of the FRIA. The notification duty is not a submission of the document for approval. It is an indication to the supervisor that a FRIA has been completed and is available for inspection. The document itself must be retained and produced on request.
The identity of the relevant market surveillance authority depends on the sector and the member state. For most high-risk AI systems, the market surveillance authority will be the national body designated under Article 70. In sectors where a sectoral authority already exercises oversight (financial services, healthcare, employment), that authority may serve as the designated market surveillance authority for AI Act purposes in that sector. Deployers operating across multiple member states should identify the relevant authority in each jurisdiction in which the system is used.
Several national data protection authorities have publicly indicated that they expect to receive FRIA notifications at volume from August 2026 and that the FRIA will be the first document requested in any inquiry into the use of a high-risk AI system involving personal data. Deployers should treat the notification obligation as a hard deadline, not an administrative formality.
Enforcement exposure under Article 99
Failure to complete a FRIA where Article 27 requires one, or producing a FRIA that does not address all seven required elements, falls within the second tier of Article 99. The ceiling for second-tier infringements is EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. This is the same ceiling that applies to breaches of the general deployer obligations under Article 26.
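The "whichever is higher" formula means the statutory ceiling scales with turnover only above a break-even point of EUR 500 million (where 3 per cent equals EUR 15 million). A one-line sketch of the ceiling arithmetic:

```python
def second_tier_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Article 99 second-tier ceiling: EUR 15 million or 3% of worldwide
    annual turnover, whichever is higher. This is the maximum, not the
    fine a supervisor would actually set."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# EUR 2 billion turnover: 3% = EUR 60 million, above the floor.
print(second_tier_ceiling(2_000_000_000))  # 60000000.0
# EUR 100 million turnover: 3% = EUR 3 million, so the floor applies.
print(second_tier_ceiling(100_000_000))   # 15000000.0
```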
The notification failure under Article 27(3), that is, failing to notify the supervisor upon completion, is a separate infringement. Depending on the supervisor's reading of its enforcement discretion, a deployer who completes a FRIA but fails to notify may face a lesser sanction than one who fails to complete the assessment at all. However, Article 99 does not create a formal hierarchy between the two failures, and supervisors retain full discretion within the statutory ceiling.
Article 99(6) instructs supervisors to take into account the size and economic viability of the operator when setting the level of a fine, with specific reference to SMEs and startups. This is not an exemption. It is a calibration instruction. A small deployer that genuinely cannot complete a compliant FRIA by 2 August 2026 is better served by engaging the supervisor proactively than by attempting to present an incomplete document as compliant.
The minimum FRIA file: a practical template structure
The following structure maps directly to the seven elements required by Article 27(1). Each section should be completed with specificity to the actual deployment context. Generic descriptions that could apply to any deployment of the system will not satisfy the obligation.
- Cover sheet. System name, provider name, version, intended purpose as stated in the provider's instructions for use, deployer name, date of assessment, name and position of the person responsible for the assessment.
- Process description (Article 27(1)(a)). A narrative account of the process in which the system is embedded, the decisions it informs, the parameters set by the deployer, and the conditions under which it will operate. This section should be written in terms of the specific deployment, not the generic product specification.
- Period and frequency (Article 27(1)(b)). The start date of deployment, the expected duration, the anticipated frequency of individual-level decisions (daily, per transaction, per application), and any scheduled review points.
- Affected categories (Article 27(1)(c)). A description of who will be subject to the system's outputs, including any identifiable subgroups. Where the deployer has conducted demographic analysis of the population likely to be affected, the results should be summarised here.
- Fundamental rights risk analysis (Article 27(1)(d)). An assessment of each relevant fundamental right, whether the system's operation in this context creates a risk of harm to that right, the severity and likelihood of the harm, and whether any systemic risk of discriminatory outcome has been evaluated.
- Human oversight description (Article 27(1)(e)). Named roles responsible for oversight, their documented competence and training, their authority to override the system, the threshold at which intervention is expected, and the escalation path for edge cases. This section should cross-reference the oversight register required under Article 26(2).
- Mitigation measures and complaint mechanism (Articles 27(1)(f) and 27(1)(g)). Measures already in place to address identified risks, measures planned but not yet implemented with a completion timeline, and a description of the channel through which affected persons can submit a complaint or seek a remedy, including the response process and expected timelines.
- DPIA cross-reference (Article 27(4)). Where a DPIA has also been conducted, a note of the DPIA reference and date, and a statement that the two documents are designed to be read together as a joint assessment.
- Review log. A record of all subsequent reviews of the FRIA, including the date, the reason for the review, and any changes made to the document or to the deployment in response.
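The review log is the section most naturally kept machine-readable, since each entry should tie a revision back to one of the material-change triggers discussed earlier. A minimal sketch follows; the trigger names and record structure are assumptions for illustration, not a statutory or official format.

```python
# Illustrative review-log structure for the final template section.
# Trigger names and fields are assumptions, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date

MATERIAL_CHANGE_TRIGGERS = {
    "purpose", "scope", "frequency", "affected_population",
    "source_data", "operational_parameters", "scheduled_review",
}

@dataclass
class ReviewEntry:
    review_date: date
    trigger: str        # which material change (or scheduled review) prompted it
    changes_made: str   # what was updated in the FRIA or the deployment

@dataclass
class ReviewLog:
    entries: list[ReviewEntry] = field(default_factory=list)

    def record(self, review_date: date, trigger: str, changes_made: str) -> None:
        if trigger not in MATERIAL_CHANGE_TRIGGERS:
            raise ValueError(f"unrecognised trigger: {trigger!r}")
        self.entries.append(ReviewEntry(review_date, trigger, changes_made))

log = ReviewLog()
log.record(
    date(2026, 9, 1), "affected_population",
    "System extended to a new applicant category; sections (c) and (d) revised",
)
print(len(log.entries))  # 1
```

Rejecting unrecognised triggers forces each review to be classified against the change types that give rise to the update obligation, which is also the classification a supervisor is likely to probe.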
Related reading
For the broader framework of deployer obligations within which the FRIA sits, see EU AI Act operator obligations: a 2026 compliance guide. For the human oversight design requirement that informs the content of Article 27(1)(e), see EU AI Act Article 14: the human oversight design requirement explained. For the documentation architecture that supports the minimum operator file, see how to document AI agent risk management for EU compliance. For the enforcement architecture that will process FRIA notifications from August 2026, see EU AI Act enforcement: the AI Office and national supervisors explained.
Frequently asked questions
What is a Fundamental Rights Impact Assessment?
A Fundamental Rights Impact Assessment (FRIA) is a written document required by Article 27 of Regulation (EU) 2024/1689. It describes the deployer's process for using a high-risk AI system, identifies the categories of persons likely to be affected, assesses the specific risks of harm to those persons, describes the governance and human oversight in place, and sets out the mitigation plan if those risks materialise. It must be completed before first deployment and updated on any material change.
Who must complete a FRIA under the EU AI Act?
Article 27(1) requires a FRIA from three categories of deployer: public bodies and Union institutions using any high-risk AI system listed in Annex III; private operators providing public services (such as utilities or public transport) using Annex III systems; and any deployer, public or private, using a high-risk system listed in Annex III points 5(b) or 5(c), which cover creditworthiness assessment by non-regulated entities and risk assessment for life and health insurance.
Is a FRIA the same as a DPIA?
No, but they overlap. A Data Protection Impact Assessment (DPIA) under Article 35 of Regulation (EU) 2016/679 is triggered by processing likely to result in a high risk to natural persons. A FRIA under Article 27 of the AI Act is triggered by the category of deployer and the type of Annex III system. Article 27(4) explicitly provides that where both are required, they may be conducted jointly, and the resulting document may combine both assessments, provided all the required elements of each are present.
When does the Article 27 obligation start?
The obligation to complete a FRIA applies from 2 August 2026, when Chapter III of Regulation (EU) 2024/1689 enters into application for high-risk AI systems. For systems already in service on that date, the FRIA must be in place by 2 August 2026. There is no grandfathering provision for existing deployments. The FRIA must also be updated whenever the deployer makes a material change to the purpose, scope, or operational context of the system.
What must a FRIA contain?
Article 27(1) sets out seven required elements: (a) a description of the deployer's processes and the purpose and conditions of use of the system; (b) a description of the period and frequency of use; (c) the categories of natural persons and groups likely to be affected in the specific deployment context; (d) the specific risks of harm that may have an impact on fundamental rights, including the rights of the persons concerned; (e) a description of the implementation of human oversight measures; (f) the measures taken to address the identified risks, including information on any mitigation plan; and (g) a description of the complaint and redress mechanism available to persons affected by the system.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 27, Regulation (EU) 2024/1689, fundamental rights impact assessment for certain high-risk AI systems.
- Article 27(1)(a)-(g), Regulation (EU) 2024/1689, required contents of the fundamental rights impact assessment.
- Article 27(3), Regulation (EU) 2024/1689, notification of the market surveillance authority on completion of the assessment.
- Article 27(4), Regulation (EU) 2024/1689, joint conduct of the FRIA and the DPIA where both are required.
- Annex III, points 5(b) and 5(c), Regulation (EU) 2024/1689, creditworthiness assessment and life and health insurance risk assessment systems triggering the FRIA obligation for all deployers.
- Article 26(2), Regulation (EU) 2024/1689, human oversight obligation for deployers, cross-referenced in Article 27(1)(e).
- Article 99, Regulation (EU) 2024/1689, penalties, second tier (up to EUR 15 million or 3 per cent of worldwide annual turnover).
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation), OJ L 119, 4.5.2016.
- Article 35, Regulation (EU) 2016/679, data protection impact assessment.
- Charter of Fundamental Rights of the European Union, OJ C 326, 26.10.2012.
- Directive (EU) 2024/2853 of the European Parliament and of the Council on liability for defective products, OJ L, 18.11.2024.
- Recital 48, Regulation (EU) 2024/1689, on the scope of "providers of public services" for the purposes of Article 27.