
Solvency II, FINMA, and the EU AI Act: An Intersection of AI Risk Management


In the methodologically rigorous world of banking and insurance, regulation stands at the forefront.

These industries maintain some of the most robust governance and validation frameworks in existence, a necessity stemming from macro-level risks such as credit and cyber risk. These risks arise from a multitude of interrelated factors, from market-wide to idiosyncratic, and bear directly on the stability of firms as a whole.

Quantitative safeguards are accompanied by qualitative ones: trustworthiness and a commitment to unbiased outcomes, coupled with proven quantitative measures and industry-wide tests, make the sector well positioned to adopt regulatory frameworks such as FINMA's supervisory requirements, Solvency II, and now, with increasing urgency, the EU Artificial Intelligence Act.

In this blog post, we delve into how the EU AI Act will unfold in the banking and insurance industries under current technical doctrines - with requirements both extending existing frameworks and transitioning to accommodate the requisites of AI.

The EU AI Act: An overview of upcoming risk management requirements

The EU AI Act aims to address the unique risks arising from AI models, which carry novel risks not present in earlier algorithmic models. From the complexity and opacity of AI models (their "black-box" nature) raising concerns around explainability, to models' inherent dynamic learning and resource intensity, firms must adjust not only their validation focus but also their overarching governance plans for managing AI successfully.

The EU AI Act looks to guide this effort, encouraging firms to adequately address these emerging risks. As outlined in further detail in our EU AI Act Preparation article, the basis for the financial sector lies in the following points published by Dr. Laura Caroli, Senior Policy Advisor to the EU Parliament:

- Banking and insurance firms are specifically required to carry out a fundamental rights impact assessment prior to putting high-risk AI systems (as defined in Annex III) into use;

- Moreover, AI risk assessments and pricing models for health and life insurance were emphasized as crucial in preventing financial exclusion and discrimination;

- However, AI systems deployed to detect fraud and for prudential purposes, such as determining the capital requirements of credit institutions and insurance companies, will not be classified as high-risk;

- Thus, EU AI Act requirements for banking and insurance AI systems will be integrated into the existing supervisory mechanisms, such as Solvency II, with firms having to adhere to internal governance and risk management rules as outlined;

- The draft specifies that in order to avoid overlaps, “limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems,” hence relieving coinciding regulations.
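The classification logic sketched in the bullets above can be illustrated in code. The following is a hypothetical sketch only - the use-case names and the mapping are illustrative assumptions for this post, not a legal classification tool:

```python
# Hypothetical triage of financial AI use cases into indicative EU AI Act
# risk tiers, following the distinctions described above. Illustrative
# only; real classification requires legal analysis of Annex III.

HIGH_RISK_USE_CASES = {
    "creditworthiness_assessment",    # access to essential services (Annex III)
    "life_health_insurance_pricing",  # risk assessment & pricing for life/health
}

EXEMPT_USE_CASES = {
    "fraud_detection",                # explicitly carved out of high-risk
    "prudential_capital_modelling",   # capital requirements for institutions
}

def classify_use_case(use_case: str) -> str:
    """Return an indicative EU AI Act risk tier for a financial AI use case."""
    if use_case in HIGH_RISK_USE_CASES:
        # High-risk systems trigger a fundamental rights impact assessment
        return "high-risk"
    if use_case in EXEMPT_USE_CASES:
        return "not high-risk"
    return "requires case-by-case review"

print(classify_use_case("creditworthiness_assessment"))  # high-risk
print(classify_use_case("fraud_detection"))              # not high-risk
```

In practice such a mapping would live inside an AI inventory, so that each registered system carries its risk tier and the obligations that follow from it.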

Solvency II and the EU AI Act

EIOPA, the European Insurance and Occupational Pensions Authority, has notably published a statement on the EU AI Act and its effects on the insurance industry. Noting the heightened requirements for high-risk use cases, including creditworthiness assessments for banks and pricing and risk assessments for insurers, EIOPA points to the jurisdiction of European standardization bodies and national competent authorities (NCAs), who, alongside the European Commission's new AI Office, will enforce and oversee the Act's effects on the financial industry.

In particular, the NCAs are expected to help facilitate regulatory convergence. The Act itself refers to Solvency II, stating that "union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions…including when they make use of AI systems". Affirming these statements, EIOPA also notes that the International Association of Insurance Supervisors (IAIS) expects to further promote convergence with the development of an AI application paper in 2024.


FINMA and AI risk management

In FINMA's Risk Monitor 2023, the longer-term theme of AI risk management is flagged as a driver of supervisory attention. Observing trends in the financial industry against strategic objectives set for 2021 to 2024, FINMA highlights the need for "clear roles and responsibilities and risk management processes [to] be defined and implemented."

The report reflects FINMA's overarching risk categories of focus: Governance & Responsibility, Robustness & Reliability, Transparency & Explicability, and Non-discrimination. In particular, it notes a lack of in-house expertise, with many companies outsourcing their AI governance and risk management without fully understanding or being able to apply the results.

The importance of AI risk assessments is at the forefront for FINMA, aligning with the upcoming requirements of the EU AI Act. As the report notes, "When developing, training, and using AI, institutions need to ensure that the results are sufficiently accurate, robust, and reliable. Both the data and the models as well as the results need to be open to critical questioning."
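One way to make the expectation of robust, reliable results concrete is a perturbation test: compare a model's accuracy on clean inputs against slightly noised inputs. The sketch below is a minimal, hypothetical illustration in that spirit - the toy model, noise scheme, and threshold are our own assumptions, not tests prescribed by FINMA:

```python
# Minimal robustness check: how much does accuracy drop when inputs are
# perturbed with small random noise? Illustrative assumption-based sketch.
import random

def accuracy(model, inputs, labels):
    """Fraction of inputs the model classifies correctly."""
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(inputs)

def robustness_gap(model, inputs, labels, noise=0.1, seed=0):
    """Accuracy drop after perturbing each input by uniform noise."""
    rng = random.Random(seed)
    perturbed = [x + rng.uniform(-noise, noise) for x in inputs]
    return accuracy(model, inputs, labels) - accuracy(model, perturbed, labels)

# Toy scoring model: flags an applicant as high risk above a threshold.
model = lambda score: score > 0.5
scores = [0.1, 0.4, 0.6, 0.9]
labels = [False, False, True, True]

gap = robustness_gap(model, scores, labels, noise=0.05)
assert gap <= 0.25, "accuracy degrades too much under perturbation"
```

Production validation suites would go well beyond this - stress scenarios, distribution shift, and bias metrics - but the principle of challenging both data and results remains the same.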

Preparing for AI portfolio success

At Calvin Risk, we enable firms not only to organize AI inventories, run risk assessments, and carry out validation, but to take ownership of their risk management through explainable, quantitative results that promote the core values of Solvency II, FINMA, and the EU AI Act. Our on-premise installation option allows full control of the governance process, ensuring that AI remains in the firm's ownership, supported by the financial expertise of our specialized team.

Through our risk assessment and validation platform, we guide companies toward a complete, unbiased view of the quantitative status and impact of their AI portfolios for all of a firm's stakeholders. We are proud to promote business value alongside our regulatory tenets by allowing firms to streamline their governance system as a whole - enabling trustworthy AI, faster.


Shelby Carter

Business Development Intern
