Back in 2017, the NotPetya malware attack forced the insurance industry to address "silent cyber" head-on, prompting the rapid expansion of the cyber insurance market. Nearly a decade on, AI liability stands at a similar crossroads.
AI is transforming the way we do business, but with this rapid evolution comes a growing wave of real-world consequences. As organizations increasingly integrate AI into their operations, the question of liability becomes ever more pressing. Who's accountable when AI goes wrong? How can businesses safeguard themselves against the fallout of AI-related incidents, and how will their insurance respond?
For the most part, insurance policies are still silent on how they would respond to AI-driven loss. Reputations rest on the industry's ability to innovate quickly to address gaps in cover and avoid claims disputes.
In this article, we explore the evolving AI liability landscape, the potential financial, legal and reputational fallout for businesses and how the insurance industry is likely to respond to real and potential claims scenarios.
Key takeaways
- Most insurance policies remain silent on AI, neither explicitly covering nor excluding AI-related risks, leaving businesses exposed to gaps in insurance coverage.
- One in five insurance professionals surveyed say their insureds have already experienced losses linked to AI risk.
- AI liability exposures span multiple classes of business, including cyber liability, product liability, employment practices liability (EPL), professional indemnity, and directors and officers (D&O) liability.
- The insurance industry's experience addressing "silent cyber" provides a proven playbook on how to tackle emerging AI liability exposures.
- A standalone AI liability insurance market is starting to develop, with predictions that it could grow to $4.8 billion by 2032.
Companies are continuing to take important strides in how they embed AI within the business, with 86% already seeing a positive impact on revenue, according to Gallagher's AI Adoption research. But with that opportunity come changes in exposure. Digital transformation introduces new and emerging risks that traditional insurance policies are unlikely to adequately address.
The study of more than 1,000 business leaders found that errors, misinformation and hallucinations are a critical concern. Legal and reputational risks also rank highly, followed by data protection and privacy violations, and increased vulnerability to cyberattacks and fraud.
Since generative AI entered the mainstream in November 2022, the number of AI-related incidents involving direct financial losses over $1 million has increased steadily.1 According to the Massachusetts Institute of Technology (MIT) AI Incident Tracker, which categorizes data drawn from the AI Incident Database, the overwhelming majority of such incidents are attributable to malicious actors.2 However, the risk of losses due to AI misuse or errors is also growing.
"When you consider how much organizations are relying on AI platforms to provide critical services and products to their own clients, it creates the potential for the frequency and severity of claims to go up," says John Farley, managing director of the Cyber Liability practice at Gallagher. "You only have to look at the MIT AI Incident Tracker to see that AI incidents are already occurring, and we also know that the claims are out there."
As the risk landscape evolves, questions arise over how traditional insurance products would respond to AI-related threats.
Policy wordings weren't designed with AI liability in mind, and in the absence of direct exclusions or affirmative coverage, many remain silent on the issue. The industry's prior experience with cyber liability claims offers a crucial playbook for improving preparedness for the next generation of tech-related risks.
AI liability claims: What we learned from insurance professionals
For the first time in 2026, Gallagher's AI Adoption research explored insurance industry perceptions of AI risks. It found that a fifth of respondents had insureds who experienced economic losses and/or made insurance claims due to AI-related risks. Of those losses, just over half were covered in full, 44% were partially covered, and 3% were uninsured.