Preparing for the EU AI Act: Concrete Steps to Take in 2024

Friday, December 8th, 2023 marked a crucial milestone for trustworthy AI in the European Union: with the European Union Artificial Intelligence Act (EUAIA) reaching its final form in the concluding trilogues, many open questions and ambiguities have been settled and confirmed for application across the EU. With the European Commission, Council, and Parliament reaching this landmark agreement, many firms are left wondering: “What does this mean for me?”

Thus, Calvin is proud to release its step-by-step macro guide on preparing your firm for the EUAIA in 2024!

Step 1: Creating an AI Inventory

To comply with the EUAIA, companies must initiate a structured process to account for all AI models and use cases. This includes cataloging all AI systems, documenting their functionality, organizing training data, assigning responsible roles, and producing the relevant documentation for each entry. Tools such as the Calvin Software can streamline your AI portfolio creation.
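
As a minimal sketch of what a single inventory entry could capture, the Python record below uses illustrative field names; neither the schema nor the example use case is prescribed by the EUAIA or drawn from Calvin’s actual data model.

```python
# Minimal sketch of an AI inventory record. All field names and the example
# entry are illustrative assumptions, not an EUAIA-mandated schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                  # internal identifier of the AI system
    purpose: str               # intended use case, in plain language
    functionality: str         # what the model actually does
    training_data: list[str] = field(default_factory=list)  # dataset references
    owner: str = ""            # accountable role or team
    documentation: list[str] = field(default_factory=list)  # links to docs

inventory = [
    AISystemRecord(
        name="credit-scoring-v2",                      # hypothetical system
        purpose="Assess consumer creditworthiness",
        functionality="Gradient-boosted classifier producing a credit score",
        training_data=["loan_applications_2019_2023"],
        owner="Retail Credit Risk Team",
        documentation=["docs/credit-scoring-v2/model-card.md"],
    )
]
```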

Step 2: Expert Evaluation of AI Risk Level (EUAIA Categorization)

The EUAIA employs a risk-based approach, organizing obligations and reporting requirements into the following risk 'buckets', to which each use case must be assigned (a short sketch of recording these assignments in code follows at the end of this step):

Prohibited risk

AI systems whose use is deemed unacceptable because it violates Union values and fundamental rights. The prohibitions cover practices with a substantial potential to manipulate individuals through subliminal techniques operating beyond their consciousness, as well as the exploitation of vulnerabilities of specific groups, such as children or persons with disabilities, to materially distort their behavior in a way that may cause psychological or physical harm to them or others.

High-risk

These are systems whose potential impact on human activity is significant in both severity and scope, spanning both private and public functions. The recent trilogue discussions confirmed the Finance and Insurance sector's obligation to fulfill, among other requirements, a mandatory assessment of how their use cases impact fundamental rights. Models that influence the financial aspects of individuals' lives, such as credit scoring and insurance assessment algorithms, are likely to be classified as high-risk under the new regulation.

The high-risk AI requirements are grouped into three classes: General Requirements (applicable to all), Provider Requirements, and Deployer/Importer Requirements (dependent on user type).

General Requirements include the following:

- Adequate AI risk and quality management systems

- Effective data governance/quality of the datasets feeding the system to minimize risks and discriminatory outcomes

- Logging of activity to ensure traceability of results (see the logging sketch after this list)

- Detailed documentation providing all necessary information on the system and its purpose so that authorities can assess compliance

- Clear and adequate information to the user

- Appropriate human oversight measures to minimize risk

- Compliance with EUAIA standards of robustness, security and accuracy

- Registering high-risk AI systems on the EU database prior to deployment
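
To make the logging and traceability requirement concrete, here is a minimal Python sketch of per-invocation activity logging. The record schema (timestamp, version, input hash, output) is an assumption; the EUAIA requires traceability of results but does not prescribe a particular format.

```python
# Hypothetical per-invocation activity log for a high-risk AI system; the
# schema is an illustrative assumption, not an EUAIA-prescribed format.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_activity.log", level=logging.INFO)

def log_prediction(model_name: str, model_version: str,
                   inputs: dict, output: object) -> None:
    """Append one traceable record per model invocation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        # Hash rather than store raw inputs, limiting personal-data exposure
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    logging.info(json.dumps(record))

log_prediction("credit-scoring-v2", "2.1.0",
               {"income": 52000, "tenure_months": 34}, output=0.73)
```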

Provider Requirements encompass the following additional and post-market obligations:

- A conformity assessment to examine whether the requirements laid out above have been met

- Maintaining logs generated by relevant high-risk systems for a minimum 6-month period

- Monitoring and reporting performance and safety of AI systems throughout their lifetime with continuous active conformity to the EUAIA

For Deployers/Importers of AI actively using high-risk systems in their firms, the following is additionally required:

- Completing relevant assessments (e.g., fundamental rights impact assessments (FRIA), and data protection impact assessments (thus conforming with GDPR))

- Implementing human oversight and ensuring input data is relevant to system use

- Incident management, including mitigation and reporting

- Ensuring all relevant documentation is evidenced and compliant with the EUAIA

Minimal Risk

This is the residual category defined by the EU; beyond the initial assessment and transparency obligations, it carries no additional requirements. These AI models have very limited risk scope and severity, with little associated human impact. However, deployers of such AI are encouraged by the EU to voluntarily commit to codes of conduct.

General Purpose AI

It is important to note that the recent trilogue also finalized the subcategory of General Purpose AI (GPAI): additional transparency requirements apply to these models, which namely encompass generative AI functions with ‘infinite’ outputs (i.e., they can be adopted by any industry, as their function is not definitive, such as LLM chatbot models). Currently, there is limited information on the criteria for classifying GPAI models as high impact. The EU set the initial quantitative threshold at a cumulative amount of training compute greater than 10^25 floating point operations (FLOPs). The EU recognized that the FLOPs threshold may need to be updated by the time the EUAIA becomes applicable and has granted the Commission the authority to do so.
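
For intuition on the scale of that threshold, the sketch below compares an estimated training compute against 10^25 FLOPs. The 6 × parameters × training tokens rule of thumb is a common approximation for dense transformer training compute and is not part of the Act; the model and dataset sizes are invented for illustration.

```python
# Back-of-the-envelope check against the EUAIA's 1e25-FLOP training-compute
# threshold. The 6 * N * D heuristic and the example sizes are assumptions
# for illustration only, not taken from the Act.
EU_GPAI_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens  # common dense-transformer heuristic

compute = estimated_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {compute:.2e} FLOPs")  # ~8.40e+23
print("Above threshold" if compute > EU_GPAI_THRESHOLD_FLOPS else "Below threshold")
```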

Specifically, GPAI models with systemic risk are subject to the following additional requirements:

- Model evaluations, assessments and mitigation of systemic risks

- Fulfillment of adversarial testing

- Reports to the Commission on serious incidents

- Ensuring cybersecurity and reporting on their energy efficiency

- Lastly, until harmonized EU standards are published, GPAIs with systemic risk may have to rely on codes of practice to comply with the regulation
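
To tie these buckets back to the inventory from Step 1, here is a minimal, hypothetical risk register in code. The enum mirrors the tiers described above; the assignments themselves are invented examples and must in practice come from expert legal and technical review.

```python
# Hypothetical risk register mapping inventory entries to EUAIA tiers.
# Tier assignments are illustrative; real ones require expert review.
from enum import Enum

class EUAIARiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal-risk"
    GPAI = "general-purpose"

risk_register = {
    "credit-scoring-v2": EUAIARiskTier.HIGH_RISK,    # affects access to credit
    "marketing-copy-llm": EUAIARiskTier.GPAI,        # general-purpose model
    "spam-filter": EUAIARiskTier.MINIMAL_RISK,       # limited scope and severity
}

high_risk_systems = [name for name, tier in risk_register.items()
                     if tier is EUAIARiskTier.HIGH_RISK]
print(high_risk_systems)  # ['credit-scoring-v2']
```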

Step 3: Building a Strategic Plan for Your Firm’s Unique Risk Profile and Solidifying AI Risk Management with Investments in AI Governance

Next, a firm should define a strategic plan that reflects its risk profile; companies whose portfolios contain a large share of high-risk AI ought to invest in an AI risk management and governance suite, whether built internally or provided by external third parties (such as Calvin Risk). This way, models can be continuously assessed and kept ready for review, avoiding unintended violations like the more than 1,000 penalties issued to date under the GDPR.

Step 4: Preparing for EUAIA Audit

Following the “active” management of one’s AI portfolio (as described in Step 3), the firm can turn to preparing for EUAIA audits. This process can become quite cumbersome: several personnel and lengthy time allocations (including revisions) may be needed together with the auditor or auditing body. Given the largely manual nature of the operation, technological optimization at this step can bring substantial economic benefit.

Thus, Calvin offers an AI Management System to propel this process, allowing for the efficient upload of evidence to each AI model profile, tracking of the level of audit approval, and digitization of the entire revision process.
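
As an illustration of what such evidence tracking might look like underneath, the sketch below assumes a simple status workflow (submitted, under review, approved, rejected); the fields and statuses are hypothetical, not Calvin’s actual data model.

```python
# Hypothetical audit-evidence tracker; fields and statuses are assumptions.
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    system_name: str           # which AI system the evidence belongs to
    requirement: str           # e.g., "conformity assessment", "FRIA"
    document_path: str         # where the evidence is stored
    status: str = "submitted"  # submitted | under_review | approved | rejected

evidence = [
    EvidenceItem("credit-scoring-v2", "fundamental rights impact assessment",
                 "evidence/credit-scoring-v2/fria.pdf", status="approved"),
    EvidenceItem("credit-scoring-v2", "data governance report",
                 "evidence/credit-scoring-v2/data-governance.pdf"),
]

# Audit-readiness check: flag anything not yet approved.
for item in (e for e in evidence if e.status != "approved"):
    print(f"Pending: {item.system_name} / {item.requirement} ({item.status})")
```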

Step 5: Internal Review and Auditing

After compiling the relevant documentation for each item in the AI portfolio, it will be remitted to the respective authorities (including the new European AI Office) and audit bodies in each country. High-risk, stand-alone AI systems will be registered in an EU database. Moreover, a declaration of conformity must be signed, after which systems bear the CE (Conformité Européenne) compliance marking.

While the specifics of the remittance process are expected to be clarified in 2024, firms need to ensure that their compliance meets an all-encompassing standard across their inventory; depending on the severity of the infringement, noncompliance fines range from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover (whichever is higher in each tier).
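
As a quick worked example of this fine structure, each tier is the greater of the fixed amount and the percentage of global annual turnover; the turnover figure below is hypothetical.

```python
# Worked example of the EUAIA fine tiers quoted above.
def fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    return max(fixed_eur, pct * turnover_eur)  # whichever is higher

turnover = 2_000_000_000  # hypothetical €2 bn global annual turnover

print(f"Lower tier: €{fine(turnover, 7_500_000, 0.015):,.0f}")   # €30,000,000
print(f"Upper tier: €{fine(turnover, 35_000_000, 0.07):,.0f}")   # €140,000,000
```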

Step 6: Discussing Compliance and Risk at the Project Outset, Evaluating ROI and VaR

An optional, yet highly encouraged, final step is to evaluate the efficiency and the quantified level of risk held within the AI portfolio. With Calvin’s regularly updated ROI and VaR figures for the entire portfolio, managers can identify the most (and least) profitable models and refine use cases based on their risk profiles.
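
For intuition on the VaR side, here is a generic historical value-at-risk sketch over simulated loss scenarios. The loss figures are random placeholders and the percentile method is a textbook approach, not Calvin’s proprietary portfolio model.

```python
# Generic historical VaR over simulated AI-incident losses (placeholder data).
import random

random.seed(0)
simulated_losses = [random.lognormvariate(mu=10, sigma=1.2) for _ in range(10_000)]

def historical_var(losses: list[float], confidence: float = 0.95) -> float:
    """Loss threshold exceeded in only (1 - confidence) of scenarios."""
    ordered = sorted(losses)
    return ordered[int(confidence * len(ordered)) - 1]

print(f"95% VaR: €{historical_var(simulated_losses):,.0f}")
```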

As a result, compliance is coupled with the optimization of your portfolio, turning the process from an expense into an opportunity.

So, are you ready to tackle your AI quality & compliance ahead of the competition? Schedule a demo call with us today!

Authors

Shelby Carter

Business Development Intern

Gian Lorenzo Esposito

Business Development Intern
