
The Status of AI Regulation in the UK and US: A 2023 Review

As AI continues to gain traction around the world, the EU AI Act stands out as the forerunner of legislative frameworks for Responsible AI. The United States and the United Kingdom, two of the world's largest adopters of AI, have yet to enact comparable legislation, despite the US holding the largest AI market share and the UK being home to the largest number of AI startups in Europe. Given several recent developments in both countries, however, similar measures may well become reality in the next few months.

The Current Status of AI in the US and the UK

With an estimated AI market share of $90.4 billion*, the US ranks 1st in the world for AI Scale, Talent, Infrastructure, Research, Development, and Commercial opportunities. Regarded by many as the hub of AI innovation and development, the US is home to trailblazers such as OpenAI and Nvidia alongside industry leaders such as Google and Microsoft, and AI has permeated its tech culture. As for the climate of regulatory acceptance, the EIU, the research and analysis division of The Economist Group, states that the “US has always favored innovation over regulation, and would prefer if the market introduced its own self-regulatory principles”, a stance reinforced by its AI race with China and by the primacy of state-level jurisdiction. Thus, codification has so far been peripheral, with few directive measures.

The UK, meanwhile, has amassed a large share of the AI market as the leader within Europe, ranking 1st in Europe and 4th worldwide on aggregate. One may argue that its proximity to the EU has inspired its recent AI regulatory efforts; however, the country has yet to officially put forth conclusive measures. Sarah Cardell, CEO of the UK’s Competition and Markets Authority (CMA), emphasized the importance of proactive intervention, stating that “the speed at which AI is becoming part of everyday life for people and businesses is dramatic. There is real potential for this technology to turbocharge productivity and make millions of everyday tasks easier – but we can’t take a positive future for granted”.

* Estimated by multiplying the gross worldwide spending from the IDC AI Spending Guide by the 2022 Globe Newswire market-share figure for the United States.
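In other words, the figure is a back-of-the-envelope product of two published numbers (a rough sketch; the exact inputs appear in the cited sources rather than here):

US AI market ≈ gross worldwide AI spending (IDC) × US market share (Globe Newswire, 2022) ≈ $90.4 billion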

The UK, Developing Regulations, and the AI Safety Summit

Among the largest developments in recent AI legislation is the UK’s AI Safety Summit, taking place on 1st and 2nd November at Bletchley Park and focusing on the risks created, or exacerbated, by AI systems and the dangers that may follow. The UK government has set forth five specific objectives for the event, which brings together key countries, leading technology organizations, and academic experts. These objectives are:

1. A shared understanding of the risks posed by frontier AI and the need for action;

2. A forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;

3. Appropriate measures which individual organizations should take to increase frontier AI safety;

4. Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance;

5. A showcase of how ensuring the safe development of AI will enable AI to be used for good globally.

As the UK sets the stage for conclusive legislative action, regulations can be expected to follow soon after the conference. In parallel, the CMA has set forth its own principles for Responsible AI development. Its September 2023 report outlines principles in the following key areas:

- Accountability: Developers and deployers are accountable for the outputs provided to consumers;

- Access: Ensuring ongoing access to essential inputs without unnecessary restrictions;

- Diversity: Encouraging a sustained diversity of business models, including both open and closed approaches;

- Choice: Providing businesses with sufficient choices to determine how to utilize foundation models (FMs) effectively;

- Flexibility: Allowing the flexibility to switch between or use multiple FMs as needed;

- Fairness: Prohibiting anti-competitive conduct, including self-preferencing, tying, or bundling;

- Transparency: Offering consumers and businesses information about the risks and limitations of FM-generated content to enable informed choices.

A Potential Start to AI Regulation in the US: Bipartisan Support

As noted above, the US may well leave AI’s pitfalls to be addressed case by case through the courts. Under such a structure, however, severe incidents may be curbed only after the fact. In response, bipartisan support has coalesced around a recent bill introduced by Senate Minority Whip John Thune (R-SD) and Senator Amy Klobuchar (D-MN). As reported by Politico, the bill would “require companies to assess the impact of artificial intelligence systems and self-certify the safety of systems seen as particularly risky”. Unlike a previously debated bill regarded as “heavy-handed”, this arrangement prides itself on its moderate balance. Support has also begun to emerge from enterprise leaders, with Ryan Hagemann, AI policy executive at IBM, calling it “the most comprehensive piece of legislation that is out there right now.”

Alongside this, the White House recently secured voluntary AI safety commitments from eight additional leading AI firms (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI). These commitments reinforce the US’s self-regulated-market approach, with firms incentivized through stakeholder engagement rather than definitive regulations. The agreement centers on reinforcing AI safety, security, and trust, with the member firms committing to:

1. Ensuring products are safe before introduction, with the help of independent experts;

2. Building systems with security as a top priority, through strong safeguards and third-party scrutiny;

3. Earning the public’s trust through robust technical mechanisms and disclosures, reducing the risks of fraud and deception.

Currently, the US has set forth NIST’s Trustworthy & Responsible Artificial Intelligence Resource Center and the Blueprint for an AI Bill of Rights, both steps towards AI regulation, but only in a voluntary, advisory capacity.

So, what does this mean for AI Safety?

In light of these developments, especially in the coming months as the UK’s AI Safety Summit unfolds and the US continues its bipartisan discussions on the next steps in AI risk management, Calvin Risk stands ready with reliable regulatory solutions. Building on our expertise in preparing firms for EU AI Act compliance, we look forward to extending our offering to the US and UK markets, guiding enterprises towards excellence in the Responsible AI space through our quantitative platform and its holistic, use-case-level evaluation of your AI portfolio.

Interested in kickstarting your compliance ahead of the competition while also optimizing your ROI? Book a demo with us today!

Authors

Shelby Carter

Business Development Intern
