Author: Joey Sylvester

The increasing integration of artificial intelligence into business processes underscores the need for rigorous, structured evaluation to manage the associated risks effectively. Article 27 of the EU AI Act mandates fundamental rights impact assessments for high-risk AI deployments.1 In the EU, this includes deployments involving biometrics, critical infrastructure, education and employment, among others.2 While such assessments remain largely voluntary for AI deployments in the US, conducting one is still good practice for a variety of reasons, a position supported by the NIST AI Risk Management Framework3 and guidance from the federal government.4

The use of an impact assessment may help safeguard individuals against potential harm and strategically position organizations to better manage risks and possible liabilities, protecting organizational reputation and enhancing long-term resilience.

Why AI impact assessments matter

AI impact assessments can document many crucial factors in the AI development lifecycle, including the identification, evaluation and mitigation of AI risk. These assessments generally cover areas such as:

  • System details, including intended use and limitations
  • Geographic areas and applicable laws
  • Identification of key stakeholders that may be impacted by the AI
  • Identification of potential harms and mitigation strategies
  • Alignment of security, privacy and transparency of the AI to internal policy requirements
  • Accountability measures and oversight mechanisms

By addressing these components, organizations can be better equipped to document compliance with regulatory requirements, ethical standards and stakeholder expectations.
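For organizations that want to track these components systematically, the areas listed above can be captured in a simple structured record. The sketch below is purely illustrative; the class and field names are hypothetical and do not correspond to any standard, regulation or assessment template.

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    """Illustrative record of the assessment areas listed above.
    All field names are hypothetical, not drawn from any standard."""
    system_description: str  # intended use and limitations
    jurisdictions: list = field(default_factory=list)       # geographic areas and applicable laws
    stakeholders: list = field(default_factory=list)        # parties who may be impacted by the AI
    harms_and_mitigations: dict = field(default_factory=dict)  # potential harm -> mitigation strategy
    policy_alignment: dict = field(default_factory=dict)    # security/privacy/transparency vs. internal policy
    oversight: str = ""  # accountability measures and oversight mechanisms

    def open_items(self) -> list:
        """Return policy areas not yet confirmed as aligned."""
        return [area for area, ok in self.policy_alignment.items() if not ok]

# Example usage with hypothetical values
assessment = AIImpactAssessment(
    system_description="Resume-screening model; not for final hiring decisions",
    jurisdictions=["EU (AI Act, high-risk: employment)", "US (voluntary)"],
    stakeholders=["job applicants", "HR staff"],
    harms_and_mitigations={"biased ranking": "periodic fairness audits"},
    policy_alignment={"security": True, "privacy": True, "transparency": False},
    oversight="Quarterly review by AI governance committee",
)
print(assessment.open_items())  # ['transparency']
```

A record like this makes it easy to surface unresolved items (here, transparency) for follow-up at each stage of the lifecycle.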

Risk management

Impact assessments can directly protect individuals by uncovering and mitigating bias, fairness concerns, privacy issues, security threats and other harms. They can also improve transparency, helping users and stakeholders understand how AI decisions impact the fundamental rights of individuals.

AI assessments may offer strategic benefits to organizations, such as:

  • Identifying legal and regulatory liability: By detecting risks early — and at each stage of the AI development lifecycle — organizations can proactively address regulatory compliance gaps and be better positioned to reduce exposure to regulatory fines and litigation.
  • Protecting brand reputation: Transparent assessments demonstrate accountability, strengthening stakeholder trust and reducing reputational risks linked to AI misuse or errors.
  • Enhancing operational stability: Continuous monitoring and risk evaluation embedded in the assessment process can support swift response to potential failures or threats, maintaining continuity of operations.
  • Encouraging sustainable innovation: Robust governance frameworks arising from structured impact assessments help provide clear guidelines, enabling responsible and trusted AI-driven innovation.

Cyber insurance implications

Conducting an AI impact assessment can significantly assist an organization in purchasing and renewing cyber insurance in several ways:

  1. Risk identification and mitigation: An AI impact assessment helps identify potential risks associated with AI systems, including vulnerabilities that could be exploited in cyber attacks. By understanding these risks, organizations can implement mitigation strategies to help reduce their exposure, which may lead to more favorable terms when negotiating cyber insurance policies.
  2. Enhanced understanding of AI systems: The assessment provides a comprehensive understanding of how AI systems are integrated into business operations, including data handling and processing. This knowledge is crucial for insurers to accurately assess the risk profile of the organization, potentially leading to more tailored and cost-effective insurance solutions.
  3. Compliance and regulatory adherence: AI impact assessments often include evaluations of compliance with applicable regulations and standards. Demonstrating adherence to these can reassure insurers of the organization's commitment to maintaining robust cybersecurity practices, potentially resulting in lower premiums or better coverage options.
  4. Improved incident response plans: By identifying weaknesses in AI systems, organizations can enhance their incident response plans. This preparedness can be a key factor for insurers when determining coverage terms, as it may indicate the organization's ability to effectively manage and recover from cyber incidents.
  5. Data protection and privacy: The assessment can highlight areas where data protection and privacy measures need strengthening. Insurers value organizations that prioritize data security, as this reduces the likelihood of data breaches, influencing the cost and scope of cyber insurance.
  6. Strategic risk management: Conducting an AI impact assessment aligns with a strategic approach to risk management, showcasing the organization's proactive stance in identifying and managing risks. This can improve the organization's reputation with insurers, potentially leading to better negotiation leverage during policy renewals.
  7. Documentation and reporting: The assessment provides detailed documentation of AI-related risks and mitigation strategies, which can be shared with insurers during the underwriting process. This transparency can facilitate smoother negotiations and more accurate risk assessments.

AI impact assessments should not be a one-and-done exercise but an ongoing process integrated throughout the AI lifecycle, from design and development to deployment and production. This approach helps organizations remain adaptable to evolving AI risks, regulatory requirements and societal expectations, and may help prevent costly disruptions and liabilities.

Effective AI impact assessments, such as those required by the EU AI Act, are essential for responsible AI deployment. By strategically integrating these assessments, organizations not only protect individuals but can also proactively mitigate internal risks, safeguard data and reputation, and establish sustainable frameworks for long-term innovation and growth.

Sources

1 Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems, EU Artificial Intelligence Act, accessed 28 May 2025.

2 Article 6: Classification Rules for High-Risk AI Systems, EU Artificial Intelligence Act, accessed 28 May 2025.

3 AI Risk Management Framework, NIST AI Risk Management Framework, accessed 28 May 2025.

4 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House, 30 Oct 2023.


Disclaimer

Gallagher provides insurance, risk management and consultation services for our clients in response to both known and unknown risk exposures. When providing analysis and recommendations regarding potential insurance coverage, potential claims and/or operational strategy in response to national emergencies (including health crises), we do so from an insurance/risk management perspective, and offer broad information about risk mitigation, loss control strategy and potential claim exposures. We have prepared this commentary and other news alerts for general informational purposes only and the material is not intended to be, nor should it be interpreted as, legal or client-specific risk management advice. General insurance descriptions contained herein do not include complete insurance policy definitions, terms and/or conditions, and should not be relied on for coverage interpretation. The information may not include current governmental or insurance developments, is provided without knowledge of the individual recipient's industry or specific business or coverage circumstances, and in no way reflects or promises to provide insurance coverage outcomes that only insurance carriers control.

Gallagher publications may contain links to non-Gallagher websites that are created and controlled by other organizations. We claim no responsibility for the content of any linked website, or any link contained therein. The inclusion of any link does not imply endorsement by Gallagher, as we have no responsibility for information referenced in material owned and controlled by other parties. Gallagher strongly encourages you to review any separate terms of use and privacy policies governing use of these third-party websites and resources.