Back in 2017, the NotPetya ransomware attack compelled the insurance industry to address "silent cyber" head-on, prompting the rapid expansion of the cyber insurance market. Nearly a decade on, AI liability is at a similar crossroads.

AI is transforming the way we do business, but with this rapid evolution comes a growing wave of real-world consequences. As organizations increasingly integrate AI into their operations, the question of liability becomes ever more pressing. Who's accountable when AI goes wrong? How can businesses safeguard themselves against the fallout of AI-related incidents, and how will their insurance respond?

For the most part, insurance policies are still silent on how they would respond to AI-driven loss. Reputations rest on the industry's ability to innovate quickly to address gaps in cover and avoid claims disputes.

In this article, we explore the evolving AI liability landscape, the potential financial, legal and reputational fallout for businesses and how the insurance industry is likely to respond to real and potential claims scenarios.

Key takeaways

  • Most insurance policies remain silent on AI, neither explicitly covering nor excluding AI-related risks, leaving businesses exposed to gaps in insurance coverage.
  • One in five insurance professionals surveyed say their insureds have already experienced losses linked to AI risk.
  • AI liability exposures span multiple classes of business, including cyber liability, product liability, employment practices liability (EPL), professional indemnity and Directors and Officers (D&O).
  • The insurance industry's experience addressing "silent cyber" provides a proven playbook on how to tackle emerging AI liability exposures.
  • A standalone AI liability insurance market is starting to develop, with predictions that it could grow to $4.8 billion by 2032.

Companies are continuing to take important strides in how they embed AI within the business, with 86% already seeing a positive impact on revenue, according to Gallagher's AI Adoption research. But with that opportunity come changes in exposure. Digital transformation introduces new and emerging risks that traditional insurance policies are unlikely to adequately address.

The study of more than 1,000 business leaders found that errors, misinformation and hallucinations are a critical concern. Legal and reputational risks also rank highly, followed by data protection and privacy violations, and increased vulnerability to cyberattacks and fraud.

Since the launch of generative AI in November 2022, the number of AI-related incidents involving direct financial losses over $1 million has increased steadily.1 According to the Massachusetts Institute of Technology (MIT) AI Incident Tracker, which categorizes data drawn from the AI Incident Database, the overwhelming majority of such incidents are attributable to malicious actors.2 However, the risk of losses due to AI misuse or errors is also growing.

"When you consider how much organizations are relying on AI platforms to provide critical services and products to their own clients, it creates the potential for the frequency and severity of claims to go up," says John Farley, managing director of the Cyber Liability practice at Gallagher. "You only have to look at the MIT AI Incident Tracker to see that AI incidents are already occurring, and we also know that the claims are out there."

As the risk landscape evolves, questions arise over how traditional insurance products would respond to AI-related threats.

Policy wordings weren't designed for AI liability, but in the absence of direct exclusions or affirmative coverage, many remain silent on the issue. The industry's prior experience with cyber liability claims offers a crucial playbook for improving preparedness for the next generation of tech-related risks.

When you consider how much organizations are relying on AI platforms to provide critical services and products to their own clients, it creates the potential for the frequency and severity of claims to go up.
John Farley, managing director, Cyber Liability practice, Gallagher

AI liability claims: What we learned from the insurance professionals

In 2026, Gallagher's AI Adoption research explored insurance industry perceptions of AI risks for the first time. It found that a fifth of respondents said their insureds had experienced economic losses and/or made insurance claims due to AI-related risks. Just over half of those losses were covered in full, while 44% were partially covered and 3% were uninsured.

"It can be tricky to attribute losses directly to AI, rather than AI being part of a larger issue that generated the claim," observes Paige Cheasley, National Technology practice leader, Canada. "There's currently a lot of discussion about implementing clearer policy wordings to address the issue of 'silent AI' risks, but we anticipate insurers will wait to see how AI claims activity develops before introducing exclusions."

She anticipates that AI deployment will lead to additional exposures in the cyber insurance market, alongside increasing claims in the employment practices liability, professional indemnity and Directors and Officers (D&O) markets.

What is "silent AI"?

  • According to Gallagher Re, the expansion in deployment of third-party AI systems is creating a growing category of uninsured risk.
  • Silent AI exposures encompass risks associated with AI that insurance policies don't explicitly cover or exclude.
  • As happened with "silent cyber" exposures in traditional property-casualty markets, silent AI could expose policyholders to coverage gaps for AI risks and drive up accumulation risk for insurers.
  • The AI ecosystem's reliance on a small number of model providers makes systemic silent AI exposures more likely.

How AI liability scenarios could impact insurance

Gallagher's 2026 Cyber Insurance Market Outlook references over 200 active legal cases involving AI and machine learning, with issues stemming from data bias, privacy liability, discrimination and regulatory risks. These cases touch a broad range of liability coverage, including cyber liability, EPL, product liability, Errors and Omissions (E&O) and D&O.

The report suggests insurers will increasingly begin offering AI liability coverage this year, but cites continuing negotiations over how "loss" is defined in these new coverage options.

Farley identifies two primary AI risk categories based on the cause of loss:

  • Threat actors using AI as a weapon (e.g., deepfakes, targeted social engineering)
  • AI not performing as intended (e.g., data bias, model drift, data poisoning)

Cyber insurers are generally affirming coverage for the first category. But the second category can spill over into non-cyber lines.
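To make the second category more concrete, here is a minimal sketch of how a downstream AI user might monitor for model drift, one of the failure modes named above. The population stability index (PSI) metric and the 0.2 alert threshold are illustrative assumptions on our part, not anything prescribed by insurers or policy wordings:

```python
import math

# Illustrative sketch only: watching for model drift (an "AI not
# performing as intended" failure mode) by comparing the distribution
# of model scores in a live window against a baseline window.

def psi(baseline, live, buckets=10):
    """Population stability index between two samples of scores in [0, 1]."""
    def fractions(scores):
        counts = [0] * buckets
        for s in scores:
            counts[min(int(s * buckets), buckets - 1)] += 1
        # Floor each fraction to avoid log(0) on empty buckets.
        return [max(c / len(scores), 1e-6) for c in counts]
    base, cur = fractions(baseline), fractions(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))

def drift_alert(baseline, live, threshold=0.2):
    """Flag drift when PSI exceeds the (assumed) alert threshold."""
    return psi(baseline, live) > threshold
```

Identical distributions score near zero, while a live window concentrated far from the baseline pushes the index well past the threshold. In a real deployment, this kind of routine monitoring log is exactly the evidence that could later support, or complicate, attributing a loss to AI.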

Separately, the AI technology systems behind these exposures broadly fall into two categories:

  • Generative AI systems refer to large language models, chatbots and content generation platforms. Such risks typically arise when the AI system is in use, rather than during its development or training, and can create liability exposures through the creation of harmful outputs rather than security breaches.
  • Non-generative AI, such as the traditional machine learning-driven systems used for credit scoring, hiring algorithms, medical diagnosis and autonomous vehicles, can create liabilities when they fail or can produce harmful outcomes. These failures typically emerge from issues in the system's design, training data or from gradual performance changes, rather than from external manipulation.

AI-driven property loss exposure

According to Gallagher's research, the leading classes of business where the insurance sector anticipates losses related to AI risks are cyber liability, product liability, EPL, professional indemnity (including E&O) and D&O.

Notably, property ranks near the bottom of this list. But the insurance industry's own experience with cyber risk suggests that perception may lag behind reality.

Cyber incidents were once considered purely digital liability events until attacks demonstrated that they could cause significant physical damage, prompting the industry to develop specific coverage language.

In the case of AI, the industry is seeing the same convergence of liability and property risk exposure. For instance, an AI-driven warehouse robot operating with a flawed routing algorithm could damage racking systems or contribute to a fire. Or an AI-managed autonomous vehicle could crash into a building.

According to Martha Bane, area executive vice president and managing director of Gallagher's Property practice, property policies currently remain largely silent on AI as a trigger, and under current definitions damage that AI caused wouldn't be categorized as a cyber loss. As a result, such scenarios would likely be treated today as resultant physical damage and, therefore, be covered.

If carriers begin to explicitly incorporate AI into property exclusions or limitations, Bane says, it will warrant close review to assess coverage intent to avoid any potential protection gaps. However, there's significant uncertainty around the likely attribution of AI-related losses, which would depend on the AI use case, the nature of the incident and the policy language involved.

Potential AI-driven liability claims

As for product liability, the MIT AI Incident Tracker shows, for example, repeated incidents of autonomous vehicles colliding with other vehicles and pedestrians. How product liability coverage would respond to such a scenario depends on how the AI system is defined and whether it's regarded as the product under the policy wording, the relevant jurisdiction and any specific exclusions.

In the employment practices liability space, claims could arise from incidents where, for example, an AI tool used to vet job applications discriminates in some way, or where AI has replaced job roles. For instance, one tech company was compelled to scrap a resume screening program after it was found to discriminate against women.3

In the professional indemnity market, meanwhile, there are concerns about medical malpractice claims arising from the use of AI tools for diagnosing health issues. AI incidents could also prompt allegations that company boards and leadership failed to carry out proper due diligence prior to licensing and implementation, impacting D&O coverage.

Deliberate corruption of original AI training data before deployment is a risk that insurers are watching closely, according to Farley. "There are many coverage issues associated with data poisoning, and we're already seeing some carriers out there affirmatively covering that risk."
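As a toy illustration of why data poisoning worries underwriters, the sketch below flips training labels before a simple nearest-centroid classifier is fit, silently reversing its predictions. The classifier, the data and the `poison` helper are hypothetical teaching constructs of ours, not a real attack technique or any insurer's test:

```python
import random

# Toy data-poisoning sketch: corrupting training labels before the model
# is trained changes its behavior without any runtime security breach.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """data: list of (features, label) pairs with labels 0/1; returns class centroids."""
    by_class = {0: [], 1: []}
    for x, y in data:
        by_class[y].append(x)
    return {y: centroid(pts) for y, pts in by_class.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: sq_dist(model[y], x))

def poison(data, fraction, rng):
    """Flip the label on a random fraction of training examples."""
    data = list(data)
    for i in rng.sample(range(len(data)), int(fraction * len(data))):
        x, y = data[i]
        data[i] = (x, 1 - y)
    return data
```

Trained on clean data, the model separates two clusters correctly; trained on a heavily poisoned copy, the same code confidently mislabels them. It's this "failure baked in before deployment" quality that makes the coverage questions around data poisoning hard to draft.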

There's potential for other sources of financial loss relating to the use of AI. Business interruption, for instance, is a key insurance coverage issue for downstream users of AI platforms. The recall of a major AI model due to errors that require retraining could cause cascading losses for thousands of businesses relying on the platform.

For Farley, this type of loss is similar to a digital supply-chain loss or a cloud platform outage. "As with the cloud providers, AI could be considered a large systemic-type risk. When some of the cloud platforms went down briefly last year, there was nothing their clients could do except wait for them to come back up again a few hours later," he says.

Determining fault for AI risks: Whose liability is it?

Specialist AI underwriter Testudo has been tracking cumulative Gen AI-related lawsuits in the US since 2020. It found a pattern of sharply accelerating growth in litigation, suggesting legal exposure is expanding faster than regulatory frameworks and the insurance ecosystem can adapt.

The question of responsibility and ownership of AI models will be an increasing area of focus, as litigants seek to assign liability for losses attributable to AI use amid evolving rules and regulations. But determining whether the developer or the user of an AI tool is to blame will ultimately be a job for the courts.

Even within organizations, there's confusion about who owns the risk. Gallagher respondents are most likely to say that IT is the department with responsibility for AI-related threats, followed by senior management and the organization's risk function.

From a governance perspective, boards are striving to better understand how the rise of AI is impacting the company, its people and obligations. The role of chief AI officer is becoming more common alongside policies and procedures to address potential risks arising from third-party platforms.

Complexity of risk attribution

  • Shared responsibility: AI systems often involve multiple stakeholders, including developers, manufacturers and users. This makes it challenging to pinpoint liability when something goes wrong.
  • Black box problem: Many AI models are impenetrable black boxes. The lack of transparency can complicate the process of determining fault.

New products, exclusions and endorsements: How insurers are responding

Gallagher's research reveals concern among insurance sector respondents that most insurance policies don't explicitly address AI-related risks, with wordings that are either largely reactive in nature or seemingly written for a pre-AI world.

Most insurance sector respondents expressed concern that wordings aren't explicit about what would be covered in an AI-related loss, potentially leaving carriers open to drawn-out and costly claims disputes.

The research also indicates that insurance businesses are anticipating new policies for AI-related risks, specialized endorsements and AI-specific wordings. There's also an expectation that AI exclusions will become more widespread, particularly as a loss history begins to build.

A small but growing number of cyber insurance carriers are adapting their products to these evolving exposures, with endorsements for AI-related professional liability, product liability, third-party liability and data poisoning risks, amongst others.

For the skeptics, a market-wide pivot towards more affirmative cover for AI risks appears unlikely to occur until there is a significant uptick in claims citing AI as a direct cause and/or increased litigation involving AI-related losses.

Having experienced the issues of silent cyber, the industry has a playbook on how to tackle emerging AI liability exposures and create opportunity and clarity in the process. Some insurers and reinsurers are already developing frameworks for monitoring portfolio-wide exposure to dominant AI platforms, to assess the systemic threat from inherent vulnerabilities and to calibrate their capacity accordingly.

These developments reflect not only the appetite from insurance buyers for products that address silent AI coverage gaps, but the need to track and adapt to the impact of AI on existing classes of business. For the sector, it offers opportunities to provide innovative solutions to emerging risks and to demonstrate that it is at the leading edge of rapid technological change. A standalone AI liability insurance market is starting to develop, with predictions that it could grow to $4.8 billion by 2032.4

Just as they did with silent cyber, carriers are taking steps to protect their balance sheets from a possible surge in claims in classes where policies weren't underwritten or designed with AI-related risks in mind. The news that three carriers were seeking regulatory approval for the exclusion of AI-driven losses in their professional indemnity and commercial general liability policies suggests the direction of travel in some classes of business.5

"Insurers are seriously considering including clearer language around AI risks across a range of policies to avoid picking up those exposures inadvertently. However, the wordings could prove challenging given that AI is constantly evolving," says Gallagher's Cheasley.

"Exclusionary language around AI risks may be added in classes like errors and omissions," she continues. "Carriers are likely to be hesitant to be the first one out of the gate to exclude and may prefer to wait to see what kind of claims are coming in."

In commercial general liability, however, that language has already arrived. In the US, ISO (the standard-setting body within Verisk whose policy forms are widely adopted across the property-casualty market) has filed multi-state generative AI exclusions for commercial general liability policies, with the expectation that similar language could migrate into other non-cyber lines over time.

ISO/Verisk multi-state filings 2025: Key Generative AI exclusions

  • CG 40 47: Excludes coverage for bodily injury, property damage and/or personal and advertising injury arising out of GenAI outputs.
  • CG 40 48: Excludes coverage for personal and advertising injury related to GenAI, but preserves potential coverage for bodily injury or property damage in limited scenarios.
  • CG 35 08: Excludes coverage under Section I for bodily injury or property damage arising out of GenAI.

On the demand side, inbound inquiries are already piling up, as insurance buyers perceive the growing AI liability exposures and seek more affirmative cover. "Despite current unknowns regarding total market capacity, client demand for this coverage is actively growing. We foresee AI liability developing into a standalone business class with the potential to command a major presence in the global market," says Freddie Scarratt, deputy head of InsurTech at Gallagher Re.

It's incumbent on the insurance industry to review existing policy language and refine it, such that AI-related scenarios are explicitly addressed through affirmative coverage or clear exclusions.

In the meantime, businesses can talk to their brokers about risk mitigation strategies and insurance coverage solutions. Such conversations will offer greater clarity as the market gears up to meet the silent AI challenge. Despite the challenges AI presents, the industry's long tradition of adapting to emerging risks offers a reassuring truth: Insurance will continue to evolve to protect people and businesses, just as it has through other periods of technological change.

Published May 2026


Sources

1"AI Incident Tracker: High Severity Incidents," MIT AI Risk Initiative, accessed 20 Mar 2026.

2"AI Incident Database," AI Incident Database, accessed 20 Mar 2026.

3Dastin, Jeffrey. "Insight — Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women," Reuters, 11 Oct 2018.

4"AI Insurance Could Be a $4.8B Market by 2032," Deloitte Insights, 4 Aug 2025.

5Recamara, Josh. "Major Insurers Seek Approval to Limit Liability for AI-related Claims — Report." Insurance Business, 24 Nov 2025.