As generative artificial intelligence rapidly enters the workplace, a new wave of risk is emerging, rooted in everyday human error and outdated insurance policies, not just malicious hackers.

Author: Paige Cheasley


Beyond external threats, organizations now face risks created by how employees use generative artificial intelligence (AI) in routine tasks. While organizations are rightly focused on privacy, intellectual property and output quality, many are overlooking a simpler threat: human error. Employees are already using AI tools for drafting emails, analyzing trends and supporting decisions. But what happens when someone uploads confidential data into a public model or acts on incorrect AI-generated insights?

34% of business leaders worry about AI hallucinations, 33% about data and privacy risk and 31% about legal accountability.

The Gallagher 2025 AI benchmark report found that 34% of business leaders are concerned about AI hallucinations, instances in which generative AI tools produce information that appears convincing but is factually incorrect or entirely fabricated. Another 33% are focused on data and privacy risk, while 31% worry about legal accountability. These findings highlight a growing awareness that the technology itself isn't the only issue. It's how people interact with it.

These moments of human error create liability gaps that don't always fit neatly into existing insurance policies. When a mistake happens, who's responsible? How is the risk transferred? Are organizations as protected as they believe?

Insurance blind spots in the age of AI

Most organizations assume they're covered, but traditional insurance policies weren't built for AI.

  • Technology Errors and Omissions (Tech E&O) insurance is typically designed to protect against defects in proprietary software. How would the coverage respond to losses caused by the misuse of a publicly available AI tool?
  • Cyber policies are designed for security breaches, data exfiltration and ransomware attacks. They may not cover liabilities arising from an employee relying on flawed AI-generated information.
  • Directors and Officers (D&O) policies can cover leadership decisions that result in business loss or litigation, but only in specific circumstances such as wrongful acts, mismanagement or breach of fiduciary duty, and often after other coverages are exhausted.

Clear contractual frameworks are critical in the context of AI deployment. Organizations must differentiate between developing proprietary AI systems and integrating third-party tools, as the legal and operational risks vary significantly. A company deploying a third-party tool should secure contracts that limit liability for the model's outputs, as well as service-level agreements that define how data is used, stored and deleted. The vendor should also provide clear disclaimers about accuracy, bias and retraining. Without these, insurers may push back on a claim, and an organization may be exposed.

Real-world cases of AI liability

In late 2023, a company's use of an AI-powered chatbot made headlines when a passenger relied on it for information about flight refunds. The chatbot incorrectly responded that a refund could be requested after booking a full-price ticket and submitting documentation later. Based on this interaction, the customer proceeded with the booking. When the refund was denied, the customer challenged the decision in court. A small claims tribunal ruled in favour of the passenger, stating that the company was responsible for the information provided by its chatbot. The tribunal emphasized that companies cannot distance themselves from the digital tools they deploy to communicate with customers. In this case, the chatbot's misinformation carried the same weight as information provided by an employee.

This incident illustrates how AI miscommunication can create tangible legal and financial liability. While the company's chatbot was likely intended to streamline customer service, it ultimately exposed the company to public scrutiny, operational confusion, reputational damage and a binding legal decision. Whether the financial fallout was covered under the company's Cyber, General Liability or E&O insurance remains unclear. What's clear is that this type of risk isn't theoretical; it's already happening.

In another example, automated resume screening tools have shown a tendency to filter out minority candidates, raising discrimination concerns. In these cases, there's no malicious intent, just a flawed algorithm and a lack of human oversight. Nonetheless, the liability still falls on the hiring organization.

For risk managers and business owners, these cases raise several critical questions. Who's monitoring the accuracy of your AI tools? Have your contracts been updated to reflect the limitations of generative models? And perhaps most importantly, do your current insurance policies account for liabilities arising from human error, AI misuse or misleading automated communication?

What responsible AI looks like in practice

Gallagher has emphasized that responsible AI adoption requires strong governance, which includes having a designated leader for AI risk, whether that's a chief AI officer, an ethics officer or another trusted role. This leader should oversee procurement, vendor due diligence, model testing and post-deployment monitoring.

Canadian companies can look to the federal government's Directive on Automated Decision-Making (DADM) and the Artificial Intelligence and Data Act (AIDA) legislation as starting points. Both encourage the use of impact assessments, documentation of decision logic and ongoing review of high-risk models.

But governance also includes training. Employees need to know which tools are approved, what data they can and cannot enter and how to evaluate the reliability of AI-generated content. Clear, enforceable AI policies should be integrated into existing governance frameworks and be as actionable as your cybersecurity protocols.

Finally, organizations must revisit vendor contracts to ensure they align with their risk management strategies. When using external AI tools, contracts should include clear clauses on indemnities, liability caps, data retention, audit rights and ownership of outputs. These clauses aren't just legal formalities; they're essential safeguards in a rapidly evolving AI landscape.

Rethinking insurance as a strategic tool

While traditional insurance wasn't designed with AI in mind, it's beginning to evolve. The key is to approach coverage strategically. Risk managers should ask themselves:

  • What happens if an employee shares sensitive client data with a generative AI platform?
  • What if your chatbot misleads customers and damages your brand?
  • What if an AI decision tool filters out job applicants based on flawed logic?

Tech E&O, Cyber and D&O policies need to be reviewed with these types of risks in mind. In some cases, organizations may need to seek AI-specific endorsements or standalone policies that directly address issues like model accuracy, third-party liability, or reputational harm.

Gallagher's AI research shows that reputational damage from AI misuse is often excluded from standard insurance coverage. However, insurers may be more open to tailoring policies when organizations demonstrate strong governance, including training employees, documenting decision processes and maintaining clear incident response plans.

Moving forward with governance and coverage

AI will only grow more powerful, but power without guardrails creates risk. The most resilient organizations will be the ones that combine strong governance, clear contracts and purpose-built insurance. To move in that direction, Canadian businesses should start by auditing how their teams use AI, where employees are interacting with AI tools and what data is being entered.

From there, a careful review of your contracts is required. Ensure liability clauses are not only current but also aligned with the specific risks associated with generative AI. Standard language may no longer be sufficient to address issues such as data misuse, misinformation or third-party tool failures.

Next, assess the maturity of internal training programs. Do employees understand how to engage with AI systems responsibly? Are there defined protocols for validating outputs or reporting anomalies?

Finally, collaborate with an insurance advisor to evaluate whether the current coverage accounts for the evolving role of AI in your operations. Insurance policies should reflect both present-day usage and anticipated applications, not just the broader technology footprint.

Gallagher helps organizations navigate emerging risks with confidence. If your team is adopting AI or considering new tools, we can support you across every dimension: governance, policy design, contractual risk and insurance alignment.
Reach out to our team today to build an AI strategy that balances innovation with protection.
