
Insuring Tomorrow: Navigating the EU AI Act's Latest Twists and Turns in the Insurance Industry

Entering 2024, AI continues to have a significant impact on firm competitiveness, shaping both efficiency and opportunity. The insurance industry in particular has seen AI spread throughout its operations: from the enhancement of existing processes to the creation of novel offerings.

As the EU Artificial Intelligence Act (EUAIA) nears its roll-out, so does the need for firms to prepare for its compliance requirements, especially in a sector like insurance, where AI decisions bear directly on human life. With distinct requirements applied to each deployed model based on its risk level and attributes, what specific implications does the Act hold for insurance providers, and what actions are they advised to take?

Insurance and AI Sentiment

AI has delivered strikingly beneficial results for insurance so far, driving its rapid scaling over the past year. As reported by Forbes, insurance companies that have implemented AI in their systems have found that “claims accuracy improved by up to 99.99%”, alongside operational efficiency and customer experience improvements of 60% and 95%, respectively.

Moreover, IoT sensors and the arrival of LLM capabilities are advancing the insurance field’s use of AI. With over 1 trillion devices expected to be connected by 2025, as estimated by McKinsey, AI will ultimately unlock new product offerings through these enhanced capabilities. Such products, from Swiss Re’s Flight Delay Compensation to the new insurance risk assessments for 3D-printed buildings and additive manufacturing that McKinsey notes, will become a focal point for insurers as they adjust to new demands and offer increasingly competitive deals to clients.

So, how can insurers ensure compliance with the Act across such a variety of uses?

Most Common AI Use-cases in Insurance: The Rise of Generative AI

It is vital to note which use-cases are most widely adopted across insurance networks, as the sector hosts a variety of both unique and shared cases:

- Claims (new products): ML can ingest large volumes of data to power entirely new products. For example, Swiss Re’s Flight Delay Compensation uses a pricing engine able to adjust rates based on data from over 90,000 flights per day.

- Claims (individual assessment): Includes IoT and computer vision assessments, such as analyzing the driving style and vehicle dynamics in the period surrounding a car crash to identify responsibility and the appropriate claims adjustments.

- Pricing and underwriting: From the digitalization of existing touch points to new data assets from digital partners (telematics, remote sensors, satellite images or digital wellness records) that can be accessed in near real time. In addition, supervised learning can complement underwriting processes, such as smart triaging and routing (a minimal sketch follows this list).

- Conversational AI: NLP chatbots and voice assistants that remain available to clients 24/7, while also maintaining a consistent quality of experience.

- Fraud detection: ML and computer vision can analyze documents and images related to a claim to identify potential fraud. Alongside detecting computer-manipulated images and documents, such systems can review photos posted online to check whether a claim was submitted for items a customer had already sold.

- Content categorization: Revolves around Optical Character Recognition (OCR) and the ability of AI to read, interpret and categorize unstructured data from incoming letters, emails, forms, and Excel sheets.
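
As an illustration of the supervised-learning triage mentioned above, here is a minimal sketch, assuming scikit-learn, that trains a classifier to route claims either to straight-through processing or to manual review. The feature set, synthetic data, and labeling rule are hypothetical assumptions for illustration only; a real deployment would train on audited historical claims data.

```python
# A minimal sketch of supervised claims triage, assuming scikit-learn.
# Feature names, thresholds, and the synthetic labels are hypothetical;
# a real insurer would train on audited historical claims data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic claim features: amount, policy age, prior claims, reporting delay.
X = np.column_stack([
    rng.lognormal(mean=8.0, sigma=1.0, size=n),  # claim amount (EUR)
    rng.uniform(0, 20, size=n),                  # policy age (years)
    rng.poisson(0.5, size=n),                    # number of prior claims
    rng.integers(0, 60, size=n),                 # days from incident to report
])
# Hypothetical label: 1 = route to manual review, 0 = straight-through processing.
y = (((X[:, 0] > 5_000) & (X[:, 3] > 30)) | (X[:, 2] > 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

A simple tree ensemble is used here only to keep the sketch self-contained; the point is that whichever model performs the routing, it becomes part of the AI portfolio that must be mapped against the Act’s risk categories discussed next.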

The Relevance of the EU AI Act to Insurance Use-Cases

To assess the impact of the EU AI Act, one must align the categories proposed by the Act with the functions and capabilities of one’s current AI portfolio. Figure 1 outlines the Act’s overarching risk levels:

Figure 1: EU AI Act Risk Levels

Ensuring compliance for models that fall under the high-risk category will be of primary concern to insurers, especially as the recent trilogues have confirmed that models which influence the financial aspects of individuals' lives, such as credit scoring and insurance assessment algorithms, are likely to be classified as high-risk under the new regulation.
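
To make this mapping exercise concrete, the sketch below tags a hypothetical model inventory with provisional risk tiers. The system names, use-cases, and tier assignments are illustrative assumptions only, not a legal determination; the actual classification of any given model depends on the final text of the Act and on how the system is deployed.

```python
# An illustrative sketch of mapping an AI portfolio to EUAIA risk tiers.
# The system names, use-cases, and tier assignments below are hypothetical
# examples, not a legal determination of any model's actual classification.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    use_case: str
    risk_tier: str  # e.g. "unacceptable", "high", "limited", "minimal"

portfolio = [
    AISystem("life-pricing-v3", "risk assessment and pricing for life insurance", "high"),
    AISystem("claims-triage", "routing incoming claims to manual review", "high"),
    AISystem("doc-fraud-screen", "fraud detection on submitted documents", "minimal"),
    AISystem("support-bot", "customer-facing conversational assistant", "limited"),
]

# High-risk systems trigger the additional obligations discussed below,
# such as a fundamental rights impact assessment before deployment.
for system in portfolio:
    if system.risk_tier == "high":
        print(f"{system.name}: flag for high-risk compliance workflow")
```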

As a result, many insurance AI models will have to conform to the following sets of requirements: General Requirements (applicable to all stakeholders), Provider Requirements, and Deployer/Importer Requirements (depending on user type). In our previous blog post, we highlighted the general expectations, as well as the process of evaluation during the Evaluation of AI Risk Level phase of compliance.

With respect to insurance-specific provisions, the final draft of the EUAIA, as published by Dr. Laura Caroli, Senior Policy Advisor to the EU Parliament, unveils the following:

1. To ensure the protection of fundamental rights, insurance firms are required to carry out a fundamental rights impact assessment before putting a high-risk AI system into use.

2. AI risk assessment and pricing models for health and life insurance were specifically highlighted, given the importance of ensuring they do not lead to financial exclusion or discrimination.

3. However, AI systems used for the purpose of detecting fraud and for prudential purposes (capital requirements of credit institutions and insurance undertakings) will not be considered high-risk.

4. EUAIA requirements for insurance AI systems will be integrated into the existing supervisory mechanisms, such as Solvency II, and firms will have to adhere to internal governance and risk management rules as outlined.

5. However, in order to avoid overlaps, the draft states that “limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems,” thereby relieving firms of duplicate obligations.

As many models lie in a ‘grey’ area among the use-cases specified by the EUAIA, the appropriate risk scoring and required assessments should be analyzed with an AI risk governance firm, such as Calvin Risk.

At Calvin, we pride ourselves on guiding firms towards technical, ethical, and regulatory AI excellence. To aid in compliance, the assurance of bias-free algorithms, the efficiency of firm AI portfolios, and much more, we take a quantitative approach to AI risk management, ensuring your models are both compliant and optimized across their life cycles.

Interested in learning more? We look forward to continuing to guide insurers toward compliance-readiness for the EU AI Act. Schedule a demo with us today!

Authors

Shelby Carter

Business Development Intern
