Author: John Farley

Artificial intelligence (AI) systems, while offering significant benefits, can also lead to harm if they malfunction or are improperly developed. As organizations increasingly integrate AI into business operations, it's crucial to ensure that appropriate plans are in place to address the unique risks associated with AI technologies.
While many organizations have formal incident response plans (IRPs) in place for computer or data security incidents such as privacy breaches or cyber attacks, a number of AI-related harms — from data bias and discrimination to broader societal risks and harm to the environment — fall outside the scope of traditional security IRPs. The pervasiveness and unique nature of AI harms create the need for an additional resource: an AI IRP.
This bulletin outlines why an AI IRP matters in light of today's AI-related cyber threats and how such a plan can help mitigate these risks.
Key AI risks to consider in incident response plans
Individual harms
AI systems can infringe upon the civil rights of individuals or compromise their safety when improperly developed or when used in unintended ways, creating potential liability issues for developers and deployers.
Key consideration: Maintain extensive documentation on decision-making in model development to comply with the transparency requirements of applicable laws. The IRP should contain reporting guidelines and procedures for pulling technical readouts on specific decisions the AI model made and providing them to the organization's legal team for analysis and reporting.
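For illustration, a minimal decision-audit sketch in Python is below. The log location and field names are hypothetical; adapt them to your model, retention rules and counsel's guidance.

```python
import hashlib
import json
import time
import uuid

AUDIT_LOG = "ai_decision_audit.jsonl"  # hypothetical location for an append-only log

def log_decision(model_version: str, features: dict, output) -> str:
    """Record one model decision and return its ID for later retrieval."""
    decision_id = str(uuid.uuid4())
    record = {
        "decision_id": decision_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        # Hash the inputs so the log can evidence what the model saw
        # without retaining raw personal data in the audit trail.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return decision_id

def pull_readout(decision_id: str):
    """Pull the technical readout for one decision, e.g. for legal review."""
    with open(AUDIT_LOG) as f:
        for line in f:
            record = json.loads(line)
            if record["decision_id"] == decision_id:
                return record
    return None
```

An append-only record keyed by decision ID keeps each readout retrievable on demand, which is what the reporting workflow above depends on.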
Group harms/algorithmic discrimination
AI systems can inadvertently perpetuate biases, leading to discriminatory outcomes. This discrimination can result in claims related to unfair treatment or violation of anti-discrimination laws.
Key consideration: It's essential to have protocols in place for ongoing monitoring to quickly identify and rectify such biases. Keep extensive documentation related to the algorithm's design and decision-making processes to pinpoint sources of bias, including information on the data used in the training phase of the model development lifecycle and documentation on testing.
Clearly outline written protocols for both AI developers and AI deployers. AI deployers should also retain documentation on user training and education regarding the purpose and limitations of the AI model.
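As one illustration of such a monitoring protocol, the sketch below computes a simple demographic parity gap, the difference in favorable-outcome rates between groups. The 0.10 escalation threshold is a hypothetical policy choice, not a legal standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates across groups (0 = perfect parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Example: group A approved 2 of 3 times, group B 1 of 3 times (gap = 0.33).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
if parity_gap(sample) > 0.10:  # hypothetical escalation threshold
    print("Parity gap exceeds threshold; escalate per the AI IRP.")
```

A metric this simple won't capture every form of bias, but running it on a schedule gives the IRP a concrete trigger for the "quickly identify and rectify" step.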
Regulatory risk
An AI-related incident can trigger regulatory compliance obligations. Compliance with applicable laws may require timely notification to regulatory authorities. Failure to comply could result in legal costs related to regulatory investigations, which can also lead to fines and settlements.
Key consideration: Your IRP should address notification requirements for each jurisdiction in which you operate. It should include the communication plan and the roles and responsibilities of all stakeholders who may need to engage in regulatory investigations. It should be continually updated because regulations are quickly evolving.
Regulators may require extensive technical documentation on the development and use of the model, as well as evidence of an AI impact assessment and/or AI conformity assessment. Maintaining such documentation is not only a best practice, but also a requirement in many jurisdictions. Keep track of incident details and corrective actions taken to address any AI incidents.
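A lightweight way to keep per-jurisdiction obligations actionable is a notification matrix inside the IRP. The sketch below is illustrative only: the jurisdictions, regulators and owners are placeholders, and actual reporting triggers and deadlines must be confirmed with counsel.

```python
# Illustration only: regulators and owners below are placeholders.
NOTIFICATION_MATRIX = {
    "EU": {
        "regulator": "relevant market surveillance authority",
        "deadline": None,  # placeholder; set per applicable law
        "owner": "Chief Privacy Officer",
    },
    "US-CA": {
        "regulator": "state attorney general",
        "deadline": None,  # placeholder; set per applicable law
        "owner": "General Counsel",
    },
}

def notification_tasks(affected_jurisdictions):
    """List the regulator, deadline and internal owner for each affected jurisdiction."""
    return {
        j: NOTIFICATION_MATRIX[j]
        for j in affected_jurisdictions
        if j in NOTIFICATION_MATRIX
    }

# Example: an incident touching EU and California users.
for jurisdiction, task in notification_tasks(["EU", "US-CA"]).items():
    print(jurisdiction, "->", task["regulator"], "| owner:", task["owner"])
```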
Product and professional liability
An AI system that fails or produces incorrect outputs associated with products and services may lead to a variety of product liability and professional liability claims, such as claims for third-party bodily injury, property damage and contractual liability related to faulty products and the failure to render agreed-upon services.
Key consideration: Ensure that your IRP includes steps for immediate investigation, product recall procedures and additional actions to help mitigate potential damage, including procedures to identify and remediate the cause of algorithmic errors.
Privacy liability
AI systems often handle vast amounts of personal data, increasing the risk of privacy breaches and wrongful data claims.
Key consideration: Your IRP should include measures for rapidly detecting breaches and notifying affected parties, as required by applicable privacy laws. It should also provide for engaging legal experts to understand the specific notification requirements and compliance obligations under state, federal and international laws.
Copyright infringement
AI systems that generate content may inadvertently infringe on existing copyrights, leading to copyright and trademark infringement claims and litigation.
Key consideration: It's important to have procedures for reviewing AI-generated content and addressing any infringement issues.
- Conduct a detailed analysis of the AI-generated content to identify specific instances of copyright infringement.
- Examine the algorithm's processes to determine how copyrighted material was used or replicated without authorization.
- Outline communication protocols with affected parties, such as copyright holders and AI platform providers, to acknowledge the issue and describe the initial steps being taken.
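One simple pre-filter for the content-analysis step is long n-gram overlap between generated text and known reference works. The sketch below is a hypothetical heuristic only: a hit warrants legal review, and a clean result is not clearance.

```python
def ngrams(text: str, n: int = 8):
    """Set of word n-grams; long shared sequences suggest copying over coincidence."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_report(generated: str, reference_works: dict, n: int = 8):
    """Flag reference works sharing long word sequences with generated content.

    reference_works: {work_title: full_text}. A pre-filter only; route any
    hit to legal review per the communication protocols above.
    """
    gen = ngrams(generated, n)
    hits = {}
    for title, text in reference_works.items():
        shared = gen & ngrams(text, n)
        if shared:
            hits[title] = sorted(shared)
    return hits
```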
Medical malpractice
In healthcare settings, AI systems used for diagnostics or treatment recommendations may provide incorrect information. Patient safety may be jeopardized, leading to medical malpractice claims on a large scale, as multiple patients may rely on the same erroneous medical advice.
Key consideration: Your IRP should ensure that AI systems and incident response protocols comply with healthcare regulations and standards. The protocol may include immediately halting use of the AI system involved to prevent further harm and isolating the affected AI system from other systems to prevent cascading effects.
Security and operational risks
AI systems often fall outside traditional cybersecurity protocols and are susceptible to unique risks such as hallucinations, data poisoning, filter bubbles, adversarial machine learning attacks and AI-induced data leaks.
Key consideration: Certain security and privacy risks may dovetail with the traditional IRP, but specific reporting and management processes should be documented in the AI IRP. These processes may include pausing certain key features of the AI model or halting its use entirely via a "kill switch" measure that should be in place to prevent undue harm. Thoroughly document key decision-making processes for incident escalation and handling.
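As a sketch of what such a kill switch can look like in practice, the example below gates model calls on an externally editable flag file. The file name, flag names and run_model call are hypothetical stand-ins for your deployment; a dedicated feature-flag service would serve the same role.

```python
import json

FLAGS_FILE = "ai_feature_flags.json"  # hypothetical; edited by the response team

def run_model(prompt: str) -> str:
    """Hypothetical stand-in for the deployed model call."""
    return f"model output for {prompt!r}"

def load_flags() -> dict:
    """Read the flag state maintained by the incident response team."""
    try:
        with open(FLAGS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        # Fail safe: if the flag state is unavailable, treat the model as disabled.
        return {"model_enabled": False}

def generate(prompt: str) -> str:
    flags = load_flags()
    if not flags.get("model_enabled", False):
        # Kill switch engaged: halt use entirely.
        return "This feature is temporarily unavailable."
    if not flags.get("web_tools_enabled", True):
        # Pause one risky capability while leaving core functionality up.
        prompt = f"[external tools disabled] {prompt}"
    return run_model(prompt)

print(generate("summarize the incident report"))
```

Failing closed when the flag state cannot be read is a deliberate design choice: during an incident, an unreadable control plane should not default to continued operation.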
Supply chain risk
Deployers of AI systems are often critically dependent on various components and associated vendors of the AI technology stack, including foundation model providers, AI compute and model deployment vendors, database vendors and data processors, automation agents, and AI-focused monitoring and security vendors. Should a breach or outage occur in any component of the AI supply chain, organizations should be ready with a plan.
Key consideration: IRPs should include communication protocols with vendors and suppliers in the AI supply chain. Communication protocols are a regulatory requirement in certain jurisdictions. Maintain contact info for AI providers, importers, deployers and distributors as part of the IRP. Additionally, develop contingency plans in the event of an outage.
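A simple way to keep this contact and contingency information actionable is a structured vendor register within the IRP. The sketch below uses invented vendor names and addresses purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class StackVendor:
    """One entry in the AI supply-chain register (illustrative fields only)."""
    name: str
    role: str             # e.g., "foundation model", "AI compute", "monitoring"
    incident_contact: str
    contingency: str      # fallback plan if this vendor has an outage or breach

REGISTER = [
    StackVendor("ExampleModelCo", "foundation model",
                "security@examplemodelco.example",
                "fail over to cached responses; disable generation features"),
    StackVendor("ExampleCloud", "AI compute",
                "noc@examplecloud.example",
                "redeploy inference to secondary region"),
]

def contacts_for(role: str):
    """Pull incident contacts and contingency steps for one layer of the stack."""
    return [(v.name, v.incident_contact, v.contingency)
            for v in REGISTER if v.role == role]
```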
Additional considerations in developing an AI incident response plan
Like security IRPs, AI IRPs should be broad enough to address the wide variety of AI-specific risks and be routinely reviewed and updated to account for the rapidly evolving AI risk landscape. Today's AI risks may be adequately addressed in your existing IRP, but it may soon be outdated due to the speed at which AI technology is developing.
Retain extensive technical documentation about the development of the AI model and the various components of the AI technology stack.
Designate a dedicated AI incident response team early in the process, because an AI incident may be managed very differently from a traditional security incident. Thoroughly document requirements for reporting to AI-specific regulators and to applicable organizations in the AI supply chain.
Resources such as the NIST AI Risk Management Framework can help inform an AI IRP with suggested actions to prepare for and respond to AI incidents.* Establishing protocols for ongoing monitoring, reporting and post-incident learning can help organizations better manage AI risk.
As with traditional IRPs, plans should routinely be reviewed and tested. AI incident tabletop scenarios should be conducted at least annually for AI developers and deployers alike.
Leveraging insurance
Many Cyber insurance policies offer free or discounted cyber risk management services, including help with IRP development. Policyholders should leverage these resources to strengthen their approach to AI risk management.
Cyber insurance and other insurance policies may provide coverage for organizations impacted by claims related to the use of emerging technology, including AI. They may cover various claims arising from specific cyber incidents, cyber attacks or alleged wrongful collection and/or sharing of information — either directly or indirectly through AI. Many Cyber insurance policies provide access to crisis services, including breach coaches, IT forensics investigators and several other breach response experts. Policyholders should be mindful of claim reporting obligations, requirements to use insurance panel breach response vendors, evidence preservation and issues that may impact attorney-client privilege.