EIOPA's August 2025 opinion does not introduce new supervisory powers. What it does is document, for the first time in supervisory guidance form, how the governance apparatus that European insurers already maintain under Solvency II maps to the obligations the EU AI Act will impose on them as deployers of high-risk AI systems. For any business seeking AI coverage, the implications reach both sides of the insurance transaction.
Key takeaways
- EIOPA's August 2025 opinion requires European insurers to govern AI under their existing Solvency II framework, not alongside it. AI is treated as a technology risk within the governance system, not a separate category.
- Insurers using AI for risk assessment and pricing in life and health insurance fall within the high-risk AI category under Annex III, point 5(c) of Regulation (EU) 2024/1689, and carry the full deployer obligations of Article 26 from 2 August 2026.
- EIOPA's February 2026 survey found two-thirds of European insurers using or piloting generative AI, with most deployments still at proof-of-concept stage and governance frameworks trailing adoption.
- DORA's ICT risk management rules under Article 9 and vendor oversight rules under Article 28 apply to AI systems procured from third parties, creating a three-layer stack: DORA as floor, Solvency II as governance structure, AI Act as high-risk deployer overlay.
- This compliance convergence means that the documentation a business must produce to satisfy its own AI Act obligations is the same evidence an insurer will need before it can underwrite that business's AI risk.
What EIOPA's August 2025 opinion actually says
EIOPA published its "Opinion on artificial intelligence governance and risk management" in August 2025 under Article 29(1)(a) of Regulation (EU) No 1094/2010, which empowers it to issue opinions to national competent authorities on the application of Union law. The opinion is addressed to national supervisors, not directly to insurers, but its practical effect is to signal how national authorities across the EEA are expected to supervise their markets. Insurers that ignore it take a supervisory relations risk alongside the regulatory one.
The core argument of the opinion is straightforward. European insurers already maintain, under Solvency II, governance and risk management systems of considerable sophistication. Article 41 of Directive 2009/138/EC requires an effective system of governance providing for sound and prudent management of business. Article 44 requires a risk management system covering all risks the firm is exposed to, including emerging risks. EIOPA's position is that AI is an emerging technology risk and that the existing governance obligations already apply to it. Firms that have been waiting for a separate AI governance framework have, in EIOPA's view, been waiting for something that was never coming. The framework already existed.
The opinion sets out five specific expectations. First, that AI risk is integrated into the ORSA (Own Risk and Solvency Assessment) process, not appended to it. Second, that the management body takes direct responsibility for AI governance rather than delegating it entirely to technology functions. Third, that AI model validation follows the same rigour applied to internal actuarial models, with independent review, documentation of assumptions, and regular back-testing. Fourth, that data quality governance specific to AI inputs is maintained alongside general data quality controls. Fifth, that third-party AI vendors are subject to oversight equivalent to that applied to other critical outsourcing arrangements under Solvency II Article 49.
How Solvency II governance maps to AI Act obligations
The structural overlap between Solvency II and the EU AI Act is not coincidental. The drafters of the AI Act drew on existing sectoral governance models when designing the requirements for high-risk AI deployers, and the result is a set of obligations that shares architecture with the Solvency II governance system even where the terminology differs.
Article 9 of Regulation (EU) 2024/1689 requires a risk management system for high-risk AI that identifies and analyses risks, evaluates them against defined criteria, and implements appropriate risk management measures. That obligation formally sits with providers, but insurers that develop models in-house take on provider duties, and even pure deployers must operate the system within the risk framework the provider establishes. Article 44 of Directive 2009/138/EC requires a risk management system covering underwriting and reserving, asset-liability management, investment, liquidity and concentration risk management, operational risk, and reinsurance and other risk-mitigation techniques. AI operational risk sits within the operational risk category. A firm that has a functioning Solvency II risk management system has the structural foundation for AI Act risk management compliance; it must extend that structure to AI-specific risks rather than build a parallel architecture.
Article 14 of the AI Act requires that high-risk AI systems be designed and developed in a way that allows for effective human oversight by natural persons. As a deployer, an insurer inherits whatever oversight infrastructure the provider has built into the system and must staff it with persons who have the competence, training, authority, and support to exercise oversight in practice. Under Solvency II, the management body already holds formal responsibility for approving and overseeing the risk management system. The governance line from AI oversight to senior management, which Article 14 implies but does not fully specify, is already present in the Solvency II structure.
Article 17 of the AI Act requires providers of high-risk AI systems to implement a quality management system covering design, development, validation, and post-market monitoring. Under Article 25, a deployer that substantially modifies a high-risk system, or places one on the market under its own name, takes on provider obligations, so insurers who build proprietary AI models for underwriting or claims assessment are in scope. For insurers using commercially procured systems without modification, Article 17 does not apply directly, but the expectation that providers maintain a quality management system becomes relevant in vendor selection and in the due diligence record the insurer must keep under DORA Article 28.
The three specific requirements EIOPA places on AI-using insurers
Beyond the mapping exercise, EIOPA's opinion identifies three areas where it expects national supervisors to pay specific attention in their ongoing oversight of AI-using firms.
Model explainability and policyholder protection
EIOPA emphasises that AI systems used in underwriting, pricing, or claims decisions must produce outputs that can be explained to policyholders in terms they can understand and, where those decisions are adverse, that can be reviewed by a person with authority to override them. This expectation sits alongside the GDPR right not to be subject to solely automated decisions under Article 22 of Regulation (EU) 2016/679, which already applies to decisions producing legal or similarly significant effects. The EIOPA opinion adds a supervisory dimension: national authorities are expected to verify that firms have a documented explainability approach, not merely an assertion that the AI system is interpretable.
Third-party AI vendor oversight
Where an insurer procures an AI system from a third party, EIOPA expects the insurer to apply the same vendor oversight standards it applies to other critical or important operational functions under Solvency II Article 49, including prior supervisory notification where required by national law. This requirement interacts directly with DORA Article 28, which mandates that financial entities maintain a register of ICT third-party service providers and conduct risk assessments before entering critical contracts. For AI vendors, the combination of Solvency II Article 49 and DORA Article 28 creates a pre-procurement due diligence obligation that includes assessing the vendor's own AI governance, data practices, and incident response capacity.
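The due diligence record this combination demands can be thought of as a per-vendor checklist of controls that must all be satisfied before contract signature. The sketch below is illustrative only: the field names are assumptions for this example, not the register-of-information template defined by the ESAs' implementing technical standards under DORA.

```python
from dataclasses import dataclass, fields

@dataclass
class AIVendorEntry:
    # Illustrative fields only; the actual DORA register-of-information
    # layout is set by the ESAs' implementing technical standards.
    vendor_name: str
    service_description: str
    criticality_assessed: bool        # critical/important function analysis done
    audit_rights_in_contract: bool
    exit_provisions_in_contract: bool
    incident_response_reviewed: bool
    ai_governance_reviewed: bool      # vendor's own AI governance examined

def missing_controls(entry: AIVendorEntry) -> list[str]:
    """List the boolean due-diligence controls still unsatisfied."""
    return [f.name for f in fields(entry)
            if isinstance(getattr(entry, f.name), bool)
            and not getattr(entry, f.name)]

# Hypothetical vendor record, part-way through due diligence
entry = AIVendorEntry("Acme Models Ltd", "claims triage scoring",
                      criticality_assessed=True,
                      audit_rights_in_contract=False,
                      exit_provisions_in_contract=False,
                      incident_response_reviewed=True,
                      ai_governance_reviewed=False)
print(missing_controls(entry))
```

A register built this way makes the survey finding in the next section concrete: contracts that lack audit rights or exit provisions show up as open items rather than as gaps discovered under supervisory questioning.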
Data quality and bias monitoring
The third area of specific supervisory attention is data governance. EIOPA's opinion requires that AI training data and operational input data be subject to documented quality controls, with particular attention to the risk that biased or unrepresentative training data produces discriminatory pricing or underwriting outcomes. This connects to EU AI Act Article 26(4), which requires deployers to ensure that input data is relevant and sufficiently representative of the system's intended purpose. For insurers, the practical implication is a data governance process that distinguishes between historical actuarial data, which may embed past discriminatory patterns, and AI training corpora, which must be reviewed specifically for representativeness before the system is put into production.
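A representativeness review of this kind can start with something very simple: comparing the share of each group in the training data against a reference population and flagging material deviations. The sketch below is a minimal illustration, not a regulatory method; the group labels, reference shares, and tolerance threshold are all assumptions chosen for the example.

```python
from collections import Counter

def representativeness_gaps(training_labels, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from the
    reference population share by more than `tolerance` (absolute)."""
    n = len(training_labels)
    counts = Counter(training_labels)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / n
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical example: age bands in a motor-pricing training set,
# compared against an assumed portfolio-wide age distribution.
labels = ["18-30"] * 120 + ["31-50"] * 700 + ["51+"] * 180
reference = {"18-30": 0.25, "31-50": 0.50, "51+": 0.25}
print(representativeness_gaps(labels, reference))
```

Even a check this crude produces the kind of documented evidence the opinion asks for: a record that the training corpus was compared against a stated reference distribution before the system went into production.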
EIOPA's February 2026 survey: where European insurers stand
EIOPA published supplementary survey data in February 2026 covering the state of AI adoption across European insurance markets. The headline finding was that approximately two-thirds of European insurers were using or actively piloting generative AI in at least one business function. The most common applications were claims processing, customer service automation, fraud detection, and internal document analysis. A smaller but growing proportion of firms were piloting generative AI in underwriting support, where the high-risk AI classification under Annex III is most likely to apply.
The more significant finding was the governance maturity gap. The majority of firms using or piloting generative AI had not yet integrated AI risk into their ORSA process as the EIOPA opinion expects. Many had established AI working groups or centres of excellence, but governance accountability had not reached the management body in a form consistent with Solvency II Article 41. Few firms had applied formal model validation processes, equivalent to those used for internal actuarial models, to their AI systems.
The survey also found that third-party AI vendors were predominantly not subject to the same due diligence process applied to other critical outsourcing arrangements. Contracts with AI vendors often lacked the performance monitoring, audit rights, and exit provisions that both Solvency II Article 49 and DORA Article 28 require for critical ICT service providers. As the 2 August 2026 deadline for AI Act deployer obligations approaches, firms in this position face simultaneous exposure under three regimes, not one.
What this means for businesses seeking AI coverage
The governance obligations that EIOPA's opinion places on insurers directly shape the underwriting criteria those insurers will apply when a business seeks AI coverage. An insurer that must now document its own AI risk management system, validate its own AI models, and oversee its own AI vendors will naturally build underwriting questions from those same dimensions.
A business approaching an insurer for coverage of AI agent liability, AI-driven process failure, or third-party harm caused by an autonomous system should expect to be asked for its risk record, its human oversight register, its log retention schedule, and its incident protocol. These are the documents that Article 26 of the EU AI Act requires every deployer of high-risk AI to hold. They are also the documents that a risk-aware insurer, now itself subject to the same supervisory expectations, will regard as the minimum evidence base for assessing and pricing AI coverage.
The practical consequence is that AI Act compliance documentation is no longer primarily a regulatory exercise. It is also a commercial one. A business that produces the minimum operator file described in Article 26 is also producing the evidence an insurer needs to write the risk. A business that cannot produce it is, from the insurer's perspective, an unquantifiable exposure. The connection between compliance readiness and insurability is direct, and EIOPA's opinion makes it structural rather than incidental.
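One way to picture the overlap is as a mapping from Article 26 obligations to the evidence items an underwriter might request. The mapping below is a sketch under stated assumptions: the document names are hypothetical, and the pairing of each artefact with a paragraph of Article 26 is this article's reading, not a regulatory template.

```python
# Hypothetical mapping from Article 26 deployer obligations to the
# evidence items an underwriter might request; document names are
# illustrative, not an official checklist.
ARTICLE_26_EVIDENCE = {
    "human oversight (Art. 26(2))": "oversight_register",
    "input data relevance (Art. 26(4))": "data_quality_report",
    "monitoring and incidents (Art. 26(5))": "incident_protocol",
    "log retention (Art. 26(6))": "log_retention_schedule",
}

def underwriting_gaps(available_documents: set[str]) -> dict[str, str]:
    """Return the obligations for which no evidence document is on file."""
    return {obligation: doc
            for obligation, doc in ARTICLE_26_EVIDENCE.items()
            if doc not in available_documents}

# A business that holds only two of the four artefacts
print(underwriting_gaps({"oversight_register", "incident_protocol"}))
```

Run against a complete operator file, the function returns an empty dict; anything it does return is simultaneously a regulatory gap and an underwriting objection.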
For businesses that want to understand what full AI governance documentation should look like, agentcertified.eu maintains a framework for evaluating AI governance readiness against the standards that both regulators and insurers are beginning to apply in practice.
The compliance convergence: insured and insurer facing the same obligations
The most significant structural consequence of EIOPA's opinion is that it places insurers and their insured clients on the same side of a compliance obligation, rather than on opposite sides of a commercial transaction. Both are deployers of AI systems. Both are subject to the EU AI Act's high-risk deployer regime where those systems fall within Annex III. Both are now expected to maintain documented governance, oversight, and risk management frameworks for those systems.
This convergence has implications for how AI insurance products are designed and priced. Traditional insurance treats the insured as the risk-bearing party and the insurer as the risk-transferring party. In AI liability coverage, the insurer is itself subject to the same governance obligations it is assessing in its clients. A firm that has not completed its own AI governance work cannot credibly assess the governance of firms seeking coverage, and will price risk conservatively as a result.
The convergence also creates a potential alignment of interest between regulators, insurers, and insured businesses that has not previously existed in technology insurance. National supervisors who want to see AI governance standards improve across European industry can, in principle, achieve that outcome by supervising insurers' underwriting standards, rather than by supervising every AI-deploying business directly. If insurers require documented AI governance as a condition of coverage, and if regulators require that insurers themselves maintain documented AI governance, the effect is a cascading governance standard that reaches the entire market without direct supervisory intervention at every node.
Whether this cascade materialises depends on the speed at which insurers develop genuinely differentiated underwriting criteria for AI risk. The EIOPA opinion and the EU AI Act together create the regulatory conditions for that differentiation. The market has until 2 August 2026 to demonstrate whether it will use them. For an overview of the current gaps in AI agent underwriting, see the liability framework section of this publication.
Frequently asked questions
What did EIOPA's August 2025 opinion on AI governance require of insurers?
EIOPA's opinion required European insurers to apply their existing Solvency II governance and risk management frameworks to AI systems, rather than treating AI as a separate category. Specifically, EIOPA expects insurers to integrate AI risk into their governance system under Article 41 of Directive 2009/138/EC, their risk management system under Article 44, and their ORSA process. The opinion also addressed data quality, model validation, third-party AI vendor oversight, and explainability requirements for AI-driven decisions affecting policyholders.
Are European insurers classed as deployers of high-risk AI under the EU AI Act?
In many cases, yes. Annex III, point 5(c) of Regulation (EU) 2024/1689 lists AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. Insurers that use such systems in the course of a professional activity fall within the definition of deployer under Article 3(4) and are subject to the full set of deployer obligations in Article 26, including human oversight, log retention, and incident reporting.
How does DORA interact with the EIOPA opinion and the EU AI Act for insurers?
DORA, Regulation (EU) 2022/2554, applies to insurers and reinsurers as financial entities from 17 January 2025. Its ICT risk management framework under Article 9 and its third-party ICT provider regime under Article 28 both reach AI systems procured from external vendors. EIOPA's opinion treats DORA compliance as a floor for AI governance, not a substitute for it. The AI Act then adds the high-risk deployer layer on top. An insurer using an externally developed AI underwriting model must manage it under DORA's vendor oversight rules, govern it under Solvency II, and comply with Article 26 of the AI Act as a deployer.
What did EIOPA's February 2026 survey find about generative AI adoption in European insurance?
EIOPA's February 2026 survey found that approximately two-thirds of European insurers were using or actively piloting generative AI, but that the majority of deployments remained at proof-of-concept stage. The survey identified a gap between adoption pace and governance maturity: firms were running pilots without the documented risk frameworks that both the EIOPA opinion and the EU AI Act require before systems are put into production.
What is the compliance convergence between insurer and insured that the EIOPA opinion creates?
The convergence arises because the same EU AI Act that imposes obligations on insurers as deployers also imposes obligations on the businesses those insurers are asked to cover. An insurer that must now demonstrate its own AI governance to its supervisor will naturally apply the same standards when underwriting AI risk for a business client. Compliance documentation that satisfies Article 26 for the insured will also serve as the underwriting evidence the insurer needs to price and bind the risk. This creates a shared compliance language between the two sides of an AI insurance transaction.
References
- EIOPA, "Opinion on artificial intelligence governance and risk management," EIOPA-BoS-25/xxx, August 2025. Published under Article 29(1)(a) of Regulation (EU) No 1094/2010 of the European Parliament and of the Council of 24 November 2010 establishing a European Supervisory Authority (European Insurance and Occupational Pensions Authority).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024. Article 3(4) (definition of deployer), Article 9 (risk management system), Article 14 (human oversight), Article 17 (quality management system), Article 26 (obligations of deployers of high-risk AI systems).
- Regulation (EU) 2024/1689, Annex III, point 5(c): AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
- Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (Solvency II). Article 41 (general governance requirements), Article 44 (risk management system), Article 49 (outsourcing).
- Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector (DORA). Article 9 (ICT risk management framework), Article 28 (management of ICT third-party risk). Applicable to insurers and reinsurers from 17 January 2025.
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Article 22 (automated individual decision-making, including profiling).
- EIOPA, Survey on the use of artificial intelligence and machine learning in European insurance markets, February 2026. Key finding: approximately two-thirds of European insurers using or piloting generative AI; majority of deployments at proof-of-concept stage.
- Regulation (EU) No 1094/2010 of the European Parliament and of the Council of 24 November 2010 establishing a European Supervisory Authority (European Insurance and Occupational Pensions Authority), Article 29(1)(a): basis for EIOPA opinions addressed to national competent authorities.