--------- Website Scaling Desktop - Max --------- -------- Website Highlight Effekt ---------

Growing Cyber Threats and the Importance of Robustness

With the UK’s National Cyber Security Centre (NCSC) citing that AI will substantially increase cyber threats over the next two years, large attention has shifted in recent months towards the risks of these threats impacting current systems. With the complex nature of AI models inciting new vulnerabilities in both execution and susceptibility, how will industries safeguard their data and secure model use, and will these new tenants significantly impact firm strategy?

Cybersecurity and AI Susceptibility as a Growing Concern

Cybersecurity threats and incidents have non negligible growth factors, especially within their intersection of AI. Apart from the execution of cybersecurity concerns, such as in AI enhancing threat actors’ capabilities stemming from inappropriate use of AI, the susceptibility of AI systems themselves has been on a rise. In particular, outward-facing systems, chiefly LLM models, fall victim to several forms of novel security risks.

Resultantly, unique forms of attack have been developed, and must be heeded as technical teams safeguard their AI portfolios. In the case of LLMs, as regarded in our previous blog post,the following may be siege one’s AI portfolio:

- Prompt Injections, where specific prompts can manipulate LLM chatbots and bypass established filters

- Data Leakage, where an LLM could inadvertently disclose sensitive information within their responses, endangering client confidentiality and corporate data security

- Inadequate Sandboxing, where an insufficient separation of the LLM environment from sensitive systems may lead to heightened vulnerabilities - highlighting the importance of robust system architecture

- Server-side Request Forgery, where an LLM may be manipulated into performing unanticipated requests or accessing restricted resources, leading to potential data breaches or system interruptions

- Insufficient Access Controls, where a weak implementation of access controls or authentication mechanisms could expose the system to unauthorized access, heightening security risks.

- Training Data Poisoning, which involves the potential manipulation of training data to embed vulnerabilities or biases that could distort the LLM's responses or behavior - thereby negatively impacting its efficiency and efficacy.

The Increasing Case of Incidents in the Light of Robustness Issues

Robustness - referring to the model's performance under different input conditions, resilience against shifted data distributions, and security against input manipulation - takes into consideration both the processing capabilities and the underlying data which compose the model. Oftentimes, firms may face robustness issues (and frequently, be unaware of the issue) due to the lack of sufficient and acceptable training data, alongside adequate model specifications. As a result, such low robustness may give way to malicious attacks exploiting such lapses, leading to the aforementioned methods of breaching security.

Consequently, firms must look to adopt a method of AI risk management in order to evaluate the level of their AI portfolios’ robustness on a continuous basis, as through softwares such as Calvin. Operatively, firms must also look to

(1) Maintaining System Security by enforcing strict access controls, limiting the LLM access to data of authorized users only,

(2) Protecting Personal Information by providing LLMs access only on a need-to know-basis in context and not included in the training data, and

(3) Promoting Respectful Digital Communication by implementing robust and sanitized content filtering mechanisms.

Cybersecurity and the EU AI Act

Cybersecurity not only holds significance as an internal governance and safety measure, but also as an upcoming regulatory requirement. In the nearing EU AI Act (EUAIA), specific provisions have been enacted to safeguard the union’s use of AI as a whole. Precisely stated by the Act:

“High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in the light of their intended purpose and in accordance with the generally acknowledged state of the art.”

The EUAIA continues to elaborate on the importance of cybersecurity within one’s AI safety requirements, mentioning needed security controls and appropriate underlying ICT infrastructure. The Act additionally states, “...[AI model] providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use of or attacks should duly consider accidental model leakage, unsanctioned releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft”.

Thus, these notes must be achieved in accordance with the requirements established by the EUAIA, which are equivalently accepted as the cybersecurity standards outlined in Article 10 and Annex I of the Regulation 2022/0272.

How firms can super-defense their AI models against attacks

At Calvin, we developed the assessment tools to tackle the much-needed robustness necessities across the commercial use of AI, ensuring your business reaps the maximum benefits from these advanced technologies. Evaluating model robustness against abnormal adversarial input, our team’s wealth of experience in implementing and managing AI security and efficiency allows for us to help you navigate the complexities of AI assessment and security with effectiveness.

Interested to start your AI Risk Management journey? Let’s unlock AI excellence together, book a demo with us today!

Authors

Shelby Carter

Business Development Intern

Upgrade AI risk management today!

request a demo

Subscribe to our
monthly newsletter.

Join our
awesome team
today!

e-mail us