25 questions. Five obligation categories. One percentage. Know where you stand before 2 August 2026. Free, anonymous, nothing leaves your browser.
100 days to 2 August 2026

Article 9 requires providers to establish a risk management system. Article 26 requires deployers to verify that inputs are appropriate and to monitor use. These questions assess whether your organisation has the foundations in place.
Has your organisation identified all high-risk AI systems it currently deploys or intends to deploy before 2 August 2026? Art. 26, Annex III
Has a formal risk assessment been conducted for each high-risk AI deployment, documented in writing? Art. 9, Art. 26
Does your organisation verify that input data fed to high-risk AI systems is relevant and sufficiently representative of the intended use? Art. 26(4)
Are risk management activities reviewed and updated when the deployment context changes materially? Art. 9(4)
Has responsibility for AI risk management been assigned to a named function or individual with adequate authority and resources? Art. 26
Article 14 defines five functional capabilities that must be built into high-risk systems. Article 26(2) requires deployers to assign oversight to natural persons with competence, authority, and adequate support.
Are the persons assigned to human oversight of AI systems identified by name or role in writing? Art. 26(2)
Do oversight personnel have documented competence (training, qualifications, or experience) to understand AI system outputs and intervene effectively? Art. 26(2), Art. 14(4)
Do oversight personnel have the authority to stop, override, or disregard an AI system output without requiring senior approval? Art. 14(4)(e)
Is there a defined escalation path when the AI system outputs a result that the oversight person cannot validate or is uncertain about? Art. 14(4)
Is human oversight practically exercised before any AI-assisted decision with legal or similarly significant effect on a natural person is finalised? Art. 14(2), Art. 26(2)
Article 26(6) requires deployers to retain automatically generated logs for the period fixed by applicable law. These questions assess whether your documentation and log infrastructure is ready for an enforcement inquiry.
Are automatically generated logs from your high-risk AI systems retained for at least the period required by Union or national law (minimum 6 months where no specific period is prescribed)? Art. 26(6)
Does your organisation hold the technical documentation or information sheet provided by the AI system provider in accordance with Article 13? Art. 13, Art. 26(1)
Is there an internal AI deployment record that captures: system name, provider, deployment date, intended purpose, and the oversight person assigned? Art. 26
Can your organisation produce a complete log of AI-assisted decisions on a specific natural person within 5 working days if requested by a supervisory authority or affected individual? Art. 26(6), Art. 86 AI Act
Where a fundamental rights impact assessment (FRIA) is required under Article 27, has it been completed and filed before the system went into service? Art. 27
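The record-keeping questions above imply a minimal internal data model: a deployment record per system, plus per-decision log entries that can be filtered by affected person and held for the minimum retention period. The sketch below is illustrative only; the class names, fields, and the `p-…` subject IDs are assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date, datetime, timedelta

@dataclass
class DeploymentRecord:
    # Mirrors the internal deployment record suggested above (Art. 26).
    system_name: str
    provider: str
    deployment_date: date
    intended_purpose: str
    oversight_person: str

@dataclass
class DecisionLogEntry:
    system_name: str
    subject_id: str      # pseudonymous ID of the affected natural person
    timestamp: datetime
    outcome: str

# ~6 months, the floor where no specific retention period is prescribed
MIN_RETENTION = timedelta(days=183)

def decisions_for_person(log, subject_id):
    """Every logged AI-assisted decision concerning one person, oldest first."""
    return sorted((e for e in log if e.subject_id == subject_id),
                  key=lambda e: e.timestamp)

def may_purge(entry, now):
    """An entry may only be purged once the minimum retention period has passed."""
    return now - entry.timestamp >= MIN_RETENTION
```

With a structure like this, answering a supervisory-authority request reduces to one `decisions_for_person` call, and a scheduled purge job can call `may_purge` instead of deleting logs ad hoc.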
Article 13 requires providers to design systems so deployers can inform natural persons that they are subject to AI decisions. Article 26 obliges deployers to pass that information on. These questions test your disclosure posture.
Are affected persons informed, before or at the time of an AI-assisted decision, that the decision was supported or produced by an AI system? Art. 13(1), Art. 26
Can an affected person request and receive a meaningful explanation of how an AI-assisted decision was reached? Art. 13(3)(b), Art. 86 AI Act
Is the intended purpose of each high-risk AI system disclosed to users and affected persons in plain language? Art. 13(3)(a)
Has your organisation reviewed its privacy notices and terms of service to ensure AI processing is disclosed consistently with GDPR obligations? Art. 13 AI Act, Art. 13/14 GDPR
Where the AI system operates in a language other than the default language of affected persons, are disclosures provided in the relevant language? Art. 13(1)
Article 26(5) requires deployers to notify providers and, where required, national supervisory authorities of serious incidents. These questions assess whether your incident and monitoring posture is operationally ready.
Does your organisation have a written procedure for detecting and escalating serious incidents involving high-risk AI systems? Art. 26(5)
Is the relevant national market surveillance or supervisory authority identified, and is there a process for notifying them of serious incidents within the required timeframe? Art. 26(5), Art. 73
Are the performance and accuracy of each high-risk AI system monitored on an ongoing basis after deployment? Art. 26(5)
Is there a mechanism by which any staff member who interacts with the system can internally report AI performance concerns or potential malfunctions? Art. 26(5)
Has your organisation communicated AI incident reporting obligations to the provider, and confirmed contractually that the provider will notify you of incidents on their side? Art. 25, Art. 26(5)
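The escalation questions above boil down to an auditable trail: detect, notify the provider, notify the authority where required, close. One way to make that path checkable is a small state record; the statuses and field names below are illustrative assumptions, not terms from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class IncidentStatus(Enum):
    DETECTED = "detected"
    PROVIDER_NOTIFIED = "provider_notified"
    AUTHORITY_NOTIFIED = "authority_notified"
    CLOSED = "closed"

@dataclass
class IncidentRecord:
    system_name: str
    description: str
    detected_at: datetime
    status: IncidentStatus = IncidentStatus.DETECTED
    # Each transition is timestamped so the escalation path can be evidenced.
    history: list = field(default_factory=list)

    def advance(self, new_status: IncidentStatus, at: datetime) -> None:
        """Move to the next escalation step and keep the audit trail."""
        self.history.append((self.status, new_status, at))
        self.status = new_status
```

A record like this makes the written procedure from the first question above enforceable in practice: every serious incident leaves a timestamped sequence from detection to notification.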
The 2 August 2026 deadline is approaching. Operators who begin their documentation now have time to iterate before enforcement applies. The 90-day FRIA countdown article provides a week-by-week action plan, and the FRIA Generator tool can produce a draft document in under 15 minutes.