
Guy Carpenter Warns That Adoption of AI Brings New Risks to Cyber Aggregation


By Kenneth Araullo



As the adoption of AI accelerates, the methods of its deployment are evolving, bringing new dynamics and increasing the potential for cyber event clustering risks.

These risks, according to a recent report by Guy Carpenter, can arise from either malicious or accidental sources.

The report, part of a broader analysis by Guy Carpenter, finds that AI poses an additional threat to the software supply chain. For companies using third-party AI solutions such as ChatGPT or Claude, deploying AI models within a customer’s network or through external hosting can introduce significant risks.

Get the latest reinsurance news straight to your inbox twice a week. Sign up here

A breach or degradation of a third-party AI model could lead to outages or vulnerabilities, making the AI vendor a single point of failure for all customers using that model. The report cites several instances, including multiple ChatGPT outages in 2023 and 2024, as well as the PyTorch machine learning library breach in December 2022, as examples of such vulnerabilities.

Guy Carpenter explains in more detail the new attack surfaces introduced by AI deployment. Once deployed, AI models interact with users, making them vulnerable to manipulation. Techniques such as “jailbreaking,” where models are tricked into behaving outside their intended parameters, can lead to data disclosure, network breaches, or other significant consequences.

The report also points to incidents such as a February 2024 vulnerability in an open-source library that led to ChatGPT interactions being hijacked and Air Canada paying damages due to incorrect information provided by an AI chatbot.

The report also identifies data privacy as a critical area of concern. AI models often require large, sensitive datasets for training, which means data must be exposed, duplicated, or made available to training pipelines. Because the success of AI technology is closely tied to the amount of data it can access, there are strong incentives to aggregate data.

However, this centralization of data also increases aggregation risks. Guy Carpenter points to an incident in September 2023 in which AI researchers accidentally exposed 38 terabytes of customer data due to a misconfiguration, illustrating the potential impact of such risks.

In cybersecurity, AI is increasingly being used in security operations, including response coordination. While AI can provide rapid, automated responses to threats, Guy Carpenter warns of the risks associated with granting AI systems elevated privileges.

The report cites a recent outage at Microsoft, which may have been exacerbated by the company’s automated response to a malicious DDoS attack, as a cautionary example. It emphasizes the need for guardrails, controls, and human oversight in AI-driven security responses to avoid unintended consequences.

Looking ahead, Guy Carpenter suggests that as companies continue to integrate AI into their operations, the insurance and reinsurance sectors should view this trend as an opportunity for growth rather than a risk to be avoided.

The report draws parallels between the current wave of AI adoption and the move to cloud services a decade ago, noting that while both present significant opportunities, they also introduce new dependencies on third-party service providers. By evaluating AI deployment through the lens of third-party risk, companies can better assess and manage the potential challenges associated with this transformative technology.



