LLMs, Banking, and the EU AI Act: What Can Be Expected
As AI matures in commercial use, common applications are emerging to meet the needs of the sectors adopting it. Given the highly quantitative nature of the industry, banking leads as the top adopter of AI worldwide.
However, sweeping legislation is about to become a reality that firms must quickly comply with: the EU AI Act will impose specific regulatory requirements, and banking is among the most affected industries. Note that the anticipated EU AI Act regulations will not apply merely on an industry-specific basis; instead, each AI system will be held to standards that depend on its attributes. As the Council outlines Minimal, Limited, High, and Unacceptable risk categories, what are the implications for banks, and what particular actions are they advised to take?
Banking and AI Sentiment
As outlined by the Economist, a survey of IT executives found that 85% declared a “clear strategy” for adopting AI in developing new products and services. However, 62% of banks admitted that “the complexity and risks associated with handling personal data for AI projects often outweigh the benefits to customer experience”. AI risk management is a shared concern across the financial sector, yet the immaturity of AI itself leaves expectations for sound risk management approaches uncertain. So how can banking comply with the Act across such a variety of uses?
Most Common AI use-cases in Banking: the rise of LLMs
It is vital to note which use-cases are in primary adoption across most banking networks, as the finance sector often targets similar applications:
- Loan Pricing Models: Employed to predict the likelihood of on-time payment from historical data sets. These rely on Machine Learning (ML) models, including logistic regression, decision trees, random forests, and gradient boosting algorithms.
- Credit Scoring: Uses a similar framework, with ML algorithms generating accurate credit scores efficiently and immediately.
- Fraud Detection: Manifests in several forms, allowing banks to analyze deviations in behavior without relying on rule-based systems, which often yield high false-positive rates. This also applies to anti-money laundering, where anomaly detection and improved name screening capture nuances in the data that rules miss.
- Large Language Model (LLM) Customer Service Chatbots: An increasing staple for banks, providing immediate help with the majority of client inquiries through natural language processing of client texts. Moreover, virtual assistants may be employed in AI-powered environments, checking account balances or making transfers for clients.
- Hiring Algorithms: As in other sectors, these are often employed to handle the volume of niche staffing needs across an enterprise's numerous branches. They may also aid recruitment by evaluating test or interview performance.
- Trading & Investment: Used both internally and externally, these systems analyze data points from news articles, social media, and financial reports simultaneously, drawing investment conclusions based on the tailored trading strategy.
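For illustration, the credit-scoring and loan-pricing approach above can be sketched as a logistic regression that maps borrower features to a probability of default. The feature names, weights, and 300-850 score band below are hypothetical assumptions, not taken from any production model:

```python
import math

# Hypothetical coefficients for illustration only; a real model would be
# fitted on historical repayment data and validated for bias.
WEIGHTS = {"debt_to_income": 2.5, "missed_payments": 0.8, "utilization": 1.2}
BIAS = -3.0

def default_probability(features: dict) -> float:
    """Logistic regression: P(default) = sigmoid(w.x + b)."""
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def credit_score(features: dict) -> int:
    """Map P(default) onto a familiar 300-850 band (lower risk, higher score)."""
    p = default_probability(features)
    return round(300 + (1.0 - p) * 550)

low_risk = {"debt_to_income": 0.2, "missed_payments": 0, "utilization": 0.1}
high_risk = {"debt_to_income": 0.9, "missed_payments": 4, "utilization": 0.95}
```

In practice, banks favor gradient-boosted trees for accuracy, but logistic regression remains popular precisely because its coefficients are easy to explain to regulators.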
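The behavioral fraud-detection idea above can likewise be sketched as simple anomaly detection: rather than a fixed rule, transactions are flagged when they deviate sharply from the account's own baseline. The z-score threshold and sample data are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(amounts: list, threshold: float = 2.0) -> list:
    """Flag transactions whose z-score against the account's own history
    exceeds the threshold, instead of applying a one-size-fits-all limit."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

history = [50, 60, 55, 52, 58, 5000]  # one transfer far outside normal behavior
suspicious = flag_anomalies(history)  # only the outlier is flagged
```

Production systems use far richer features (merchant, geography, timing) and learned models, but the principle is the same: the baseline is the customer's behavior, not a static rule.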
The Relevance of the EU AI Act on Banking Use-Cases
To assess the impact of the EU AI Act, one must align the categories proposed by the Act with the functions and capabilities of one's current AI portfolio. Figure 1 outlines this scoring:
The difference between High and Limited Risk revolves around impact and severity: a high-risk system has the potential to cause grave ethical harm if misused, while a limited-risk system typically does not have serious consequences (examples include spam filters and most fraud detection algorithms).
With regard to banking, two finance-specific use-cases have been confirmed and directly highlighted by the Act: credit scoring models and risk assessment tools (in insurance). Both are identified as High Risk systems, due to the ethical implications of determining a person's access to financial resources. From this, it can be assumed that most loan pricing and hiring models will also be High Risk, since they interact with livelihoods and raise ethical concerns if biases arise in the systems.
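This portfolio-alignment exercise can be captured as a simple lookup. The mapping below follows the classifications argued in this post; it is an illustrative assumption, not an official registry under the Act or legal advice:

```python
# Illustrative mapping of common banking AI use-cases onto the Act's tiers,
# following the reasoning in this post (not an official classification).
RISK_TIERS = {
    "credit_scoring": "High",             # confirmed High Risk by the Act
    "insurance_risk_assessment": "High",  # confirmed High Risk by the Act
    "loan_pricing": "High",               # assumed: affects access to finance
    "hiring_algorithm": "High",           # assumed: affects livelihoods
    "fraud_detection": "Limited",         # typically no serious consequences
    "spam_filter": "Limited",
}

def risk_tier(use_case: str) -> str:
    """Look up a system's tier; unlisted systems need case-by-case assessment."""
    return RISK_TIERS.get(use_case, "Unclassified: assess individually")
```

Note that LLM chatbots are deliberately absent from the table: as discussed below, they carry their own transparency obligations and should be scored individually.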
In addition, special care should be taken in the case of LLMs, due to both their widespread use and unique compliance requirements. As noted by the US Consumer Financial Protection Bureau, over 98 million people (37.4% of the US population) interacted with a bank's chatbot service in 2022. Powered by AI, these programs are susceptible to several potential risks arising from their complexity and design (further details on LLMs can be found in our past blog post, Large Language Models (LLMs) in Customer Support Chatbots).
Hence, banks using LLMs must conform to the following additional protocols and disclosures:
- “Generative foundation models should ensure transparency about the fact the content is generated by an AI system, not by humans”
- “Train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law”
- “Without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law”
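The first of these disclosures can be sketched as a thin wrapper around whatever model backend a bank uses, so that every generated reply carries the required AI-origin notice. The function names and notice wording here are illustrative assumptions, not prescribed by the Act:

```python
AI_DISCLOSURE = ("You are chatting with an AI assistant; "
                 "this response was generated by an AI system, not a human.")

def respond(client_message: str, generate_reply) -> str:
    """Prepend the transparency notice to every LLM-generated reply,
    regardless of which model backend produced it."""
    reply = generate_reply(client_message)
    return f"{AI_DISCLOSURE}\n\n{reply}"

# Stub backend standing in for a real LLM call:
answer = respond("What is my balance?", lambda msg: "Your balance is CHF 1,240.")
```

Enforcing the notice at a single integration point, rather than inside each prompt, makes the disclosure auditable: compliance does not depend on the model's output behavior.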
While LLMs are not immediately classified as High Risk, their risk scoring should be analyzed by an AI risk governance firm, such as Calvin Risk.
At Calvin, our dynamic team's expertise helps guide firms toward technical, ethical, and regulatory AI excellence. We provide packages to aid in compliance, internal optimization of systems, assurance of bias-free algorithms, and much more, taking a uniquely quantitative approach to AI risk management and providing actionable VaR and ROI analyses within our platform.
Interested in learning more? We look forward to making your firm compliance-ready for the EU AI Act. Schedule a demo with us today!