Artificial intelligence (AI) is the hot topic of discussion at many technology conferences. Questions such as how it's evolving, how companies are using the technology and whether your firm has a policy governing its use are all common. Unfortunately, for many organizations, the answers aren't simple: AI is a simple concept but complex in its application.
AI is broadly defined as a machine's ability to perform the cognitive functions we usually associate with human minds, so it can be as complex as the capabilities of the human mind and more. From a business perspective, AI's utility is essentially unbounded, able to support and expand an endless array of both internal and external functions.
However, despite AI's broad-reaching benefits, many firms are experimenting with it cautiously rather than jumping in with both feet, as the risks associated with AI can be substantial. Some major categories of risk include the following.
- Cybersecurity: AI systems often require large amounts of data for training, which can be a target for cyber attacks and data breaches, potentially leading to sensitive information exposure.
- Bias: AI algorithms may inherit biases from the data used to train them, leading to unfair decisions or discrimination against protected classes of users.
- Employee disruption: AI automation can lead to job elimination, especially for repetitive and routine tasks, potentially displacing the existing workforce.
- Regulatory compliance: Advancements in AI have outpaced existing regulations, leading to potential legal and ethical challenges for businesses in adhering to evolving compliance standards.
- Errors and omissions (E&O): AI systems can be prone to errors and, if not thoroughly tested and maintained, may produce flawed results, damaging both brand and reputation.
- High implementation costs: Integrating AI technology into existing business processes can be costly, especially for smaller businesses, making the decision even more risky for C-suite executives.
Mitigating AI risk
Businesses looking to incorporate AI into their processes must implement a robust risk-mitigation plan specific to AI utilization. That means adopting rigorous AI governance practices, conducting ongoing data and system risk assessments, working to eliminate bias from AI decision-making and investing in heightened cybersecurity measures to protect sensitive data.
While most AI risks are not named perils or even defined on any current insurance policy, many risks inherent in utilizing AI are covered under existing policies. Cyber/tech E&O, employment practices liability (EPL) and directors and officers (D&O) policies may all provide coverage should AI output, or the lack of it, cause a loss to a business. Examples of potential losses covered or partially covered by insurance may include the following.
- Financial losses: AI systems may make incorrect decisions or predictions.
- Reputational damage: AI systems with biases or unfair decision-making could lead to negative publicity and damage the business's reputation.
- Data breaches and cyber attacks: AI systems are potential targets for cyber attacks.
- Job displacement: Implementing AI automation could lead to job displacement, requiring severance packages or retraining programs for large groups of affected employees.
- Compliance issues: AI systems may violate regulations or legal requirements.
- System downtime: Technical issues within AI systems could cause downtime, disrupting operations and leading to lost revenue.
- Intellectual property (IP) loss: Without proper protections, proprietary AI models or data may be stolen or replicated by competitors.
To mitigate these risks, businesses should conduct thorough risk assessments, maintain a robust employee policy for sensitive data, invest in data security measures, regularly test and validate AI systems, and ensure that AI decisions are free of bias and aligned with regulatory requirements.
Insurance coverage for AI risks
Maintaining robust insurance coverage specific to the unique risks associated with AI can help protect against a wide array of AI-related exposures. Unique insurance products are also available for AI engineers and marketers, supporting and encouraging wider use of AI for automation and decision-making by transferring utilization risks to the insurance marketplace. Covered AI exposures include:
- Security ransomware prevention
- Warranty liabilities
- Asset losses
- Toxic media content
- Regulatory compliance
- Credit card fraud
The exponential growth of AI utilization is inevitable. While most C-suite executives are taking a cautious stance on implementing AI within their business models, the reality is that most businesses are already using AI in some business processes, and this will continue to expand and evolve over time. McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year.*
Only time will tell how the insurance industry will react to the further development and utilization of AI. Will coverage continue to expand and pick up the unique perils AI presents, or will it become more restrictive, creating an entirely new segment of coverage specific to AI? Disruptions, financial losses and the new and unique exposures created by AI utilization will further shape the industry, and Gallagher's technology practice will be at the forefront of the AI risk frontier, guiding clients through evolving risks and protecting their financial success.