AI governance and managing risk: the NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) AI Risk Management Framework describes four specific functions — govern, map, measure and manage — to help organizations address AI risks.
The "govern" function relates to cultivating and implementing a risk management culture and applies to all stages of an organization's risk management. It covers:
- Policies, processes, procedures and practices related to the mapping, measuring and managing of AI risks. These must be present, transparent and implemented effectively.
- Accountability structures to ensure the appropriate teams are empowered, responsible and trained to deal with the risks. Individuals must be committed to a culture that considers and communicates AI risk.
- The prioritization of workforce inclusion, diversity and accessibility processes in the mapping, measuring and managing of AI risks.
- Processes for robust engagement with relevant AI actors.
- Policies and procedures to address AI risks and benefits arising from third-party software and data, as well as other supply chain issues (a simple sketch of these functions in practice follows this list).
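To make the four functions concrete, the sketch below shows one way an organization might structure a simple AI risk register around them. This is a minimal illustration under assumed conventions: the `AIRisk` dataclass, its field names and the example entry are hypothetical, not part of the NIST framework itself.

```python
# Hypothetical sketch: a minimal AI risk register organized around the
# NIST AI RMF functions (govern, map, measure, manage). The data model
# and field names are illustrative assumptions, not NIST requirements.
from dataclasses import dataclass


@dataclass
class AIRisk:
    system: str           # the AI system or use case      (map)
    description: str      # what could go wrong, for whom  (map)
    owner: str            # accountable team or individual (govern)
    metric: str           # how the risk is quantified     (measure)
    tolerance: float      # agreed acceptable threshold    (govern)
    current_value: float  # latest measured value          (measure)

    def within_tolerance(self) -> bool:
        # Measure: compare the latest reading against the agreed threshold.
        return self.current_value <= self.tolerance


# Map: inventory risks per system, including third-party components.
register = [
    AIRisk(
        system="resume-screening-model",
        description="Disparate selection rates across candidate groups",
        owner="HR Analytics",
        metric="error rate on audited sample",
        tolerance=0.05,      # hypothetical threshold
        current_value=0.08,  # hypothetical reading
    ),
]

# Manage: escalate anything outside tolerance to the accountable owner.
for risk in register:
    if not risk.within_tolerance():
        print(f"Escalate '{risk.system}' to {risk.owner} ({risk.metric})")
```

The point of the structure, rather than the code, is the pairing the framework calls for: every mapped risk has a named accountable owner, an agreed measurement, and a management action when tolerance is breached.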
Test cases point to rising scrutiny of board governance
AI-related legal cases are rising rapidly, with US class action filings more than doubling from seven in 2023 to 15 in 2024. These cases demonstrate growing legal scrutiny of the use of AI and a consequent rise in liability exposure for organizations and their senior leaders.
Of the 15 AI-related filings in 2024, eight were in the technology sector. The communications and industrial sectors accounted for four and two filings, respectively, with one filing attributed to the consumer non-cyclical sector.¹
As Gallagher's Lewin states, "We have seen instances of AI washing claims occurring in the US: Two asset managers have faced fines from the US Securities and Exchange Commission (SEC) for overstating or exaggerating their use of AI and its benefits for their stakeholders.
"Additionally, we're starting to see class action lawsuits emerge against companies engaging in AI washing," she continues. "The SEC has been monitoring AI closely for a significant period. Jurisdictions worldwide are being cautious about adopting overly stringent AI regulations, as they aim to avoid stifling innovation. Some regions, however, have taken a more heavy-handed approach than others."
Four litigation categories are trending in 2025:
- AI washing claims — securities litigation
- AI-generated content — legal challenges, IP infringement
- AI hallucinations — legal sanctions
- Regulatory developments/challenges
AI washing is a relatively new and growing area of concern. While lawsuits to date haven't explicitly used the term, legal challenges have been raised over misleading advertising, false claims and overstated capabilities. This may change as determinations and judgments shape the basis of future cases and courts set out clearer legal standards.
Companies and directors may also be exposed to litigation through biased algorithms, such as those used to screen recruitment candidates, even where these have been sourced from a third party.
"Increased use of AI may expose companies to employment practices liability claims, depending on how hiring algorithms are designed and used," notes Cassandra Shivers, claims advocacy leader, Executive Risk and Cyber, Gallagher.
"There is also potential D&O liability if public claims about AI quality or controls prove inaccurate, possibly leading to fiduciary breaches.
"There are examples in legal contexts where a lawyer or law firm has used AI to generate briefs and insights about case law that simply do not exist," says Shivers. "This can lead to serious consequences, including sanctions against lawyers or firms involved. Therefore, it's crucial to evaluate the information generated by AI to ensure that it is both valid and effective."
A wave of AI-related litigation has emerged recently, spanning overstated capabilities, misleading marketing, hallucinated outputs and regulatory challenges. These cases offer an early view of how AI adoption is colliding with existing legal frameworks.
Stress testing can help identify gaps in cover
For all the promise and opportunity AI presents, it also creates new exposures for companies and for individual directors and officers (D&O). These need to be considered as senior officers carry out their duties and responsibilities in relation to AI adoption.
Partnering with your broker
From a D&O insurance perspective, it's important for senior leaders to explore various scenarios — with the support of their broker — to determine how coverage might respond and where there may be gaps in risk management or cover.
As AI evolves, legal challenges will continue to increase, setting precedents and revealing how D&O policy wording responds when tested. For the insurance industry, this will also test underwriting appetite and determine whether "silent AI" needs to be tackled in a similar way to "silent cyber." Claims disputes could drive a more pronounced shift, with the market explicitly excluding AI-related claims or offering affirmative cover.
As Kevin LaCroix anticipates, "We are likely to see increasing amounts of corporate and securities litigation having to do with AI-related risks — and not just the failure to disclose AI-related risks, but allegations relating to AI misuse or the faulty deployment of AI tools, or the failure to adapt to or address the competitive urgency of AI development.
"Litigants will also seek to hold corporate managers and their employers accountable for the failure to avoid, in the deployment of AI tools, discrimination, privacy and IP violations, or consumer fraud," he notes.
Indeed, the scope of D&O liability insurance has already begun to evolve, with insurers starting to offer more specialized coverage that reflects:
- Risks tied to misleading AI claims and the legal liabilities that can arise from AI washing
- Coverage for data breaches, cyber attacks and privacy violations resulting from AI-driven systems
- Protection for risks tied to AI ethics, including algorithmic bias and unethical practices
- Adaptation to evolving regulations on AI, environmental, social and governance (ESG) concerns, and the responsibility of directors and officers in overseeing AI systems
Preparing for an evolving landscape
In the meantime, companies should continue to test how their current D&O policy wordings would respond to AI-related liabilities, while keeping abreast of legal and regulatory developments. Partnering with underwriters and brokers can help stress test the scope of coverage and address any gaps before they become a problem.
"With no clear rules or safe harbors for AI development, boards remain ultimately accountable," says Steve Bear, executive director of Professional and Financial Risks, Gallagher. "Given the breadth of AI application within a business, involving D&O specialists can help boards better understand the full spectrum of risks across all stakeholders.
"While D&O insurance offers some protection, many AI-related risks fall outside the scope of today's standard policy wording and must be addressed through strong governance frameworks. This will continue to be an evolving risk management landscape."
Published August 2025