Key considerations in building your AI risk assessment

Author: John Farley


The cyber risk landscape continues to evolve in lockstep with emerging technology. This dynamic environment presents a daunting frontier for any risk manager, many of whom are now being forced to make both near- and long-term decisions about adopting artificial intelligence (AI)-based tools. In fact, mass adoption of some generative AI tools has already begun, with over 1 million active users of ChatGPT in the first two months of its existence.1

Striking the delicate balance between making sound, risk-based decisions and staying technologically competitive enough to enhance productivity will be a priority for risk managers. A good starting point is to develop an artificial intelligence risk assessment and embed it in the overall enterprise risk management program.

AI usage risks defined

At their core, artificial intelligence tools like ChatGPT use a generative pre-trained transformer (GPT), a model designed to absorb massive amounts of data on any given subject and provide immediate, human-like output for anyone who asks. While on the surface this new technology has the potential to deliver vast new efficiencies, several potential threats may emerge, including:

  • Data bias: AI systems trained on inaccurate or incomplete information can produce skewed outcomes, ultimately leading organizations to make unfair assumptions or even implement discriminatory practices.
  • Misinformation campaigns: Malicious actors will likely find generative AI an ideal launching pad for misinformation campaigns. The credibility of the information that this technology may blindly vacuum from public sources is an open question — one that needs careful consideration before an organization relies on and acts upon AI-derived advice for key business decisions.
  • Regulatory risk: At this point, regulation of AI usage is in its infancy. However, we predict increased regulatory scrutiny of this new technology in the near future from a variety of global privacy regimes. Compliance requirements may extend both to those contributing to AI's development and to those using it to provide goods and services to their clients.
  • Privacy liability: Several privacy laws related to collecting, storing and sharing personally identifiable information (PII) will likely apply to AI usage. Careful consideration of legal compliance related to these issues should be a priority.
  • Liability related to intellectual property: Organizations need to be wary of liability risk when intellectual property intersects with AI technology. These risks can manifest if protected intellectual property becomes part of the learning models and, ultimately, of AI-generated outputs. Without proper permissions and credits, organizations may expose themselves to copyright, trademark and patent infringement litigation.

The federal government response

The White House has already emphasized the need to address AI-related risks. The Biden-Harris Administration recently announced that major AI developers — including Amazon, Google, Meta, Microsoft and OpenAI — agreed to a voluntary commitment to manage threats associated with AI usage.2 Specifically, AI developers agreed to submit to external security testing of their AI systems before they're released. The developers also agreed to share risk-related findings across governments, academia and the public.

The key components of an AI risk assessment

Ultimately, organizations should consider updating their risk management programs to incorporate AI usage, and may do so by performing their own AI risk assessment. The AI risk assessment should incorporate several important points, including the following:3

  • Key terms and definitions: There needs to be a general understanding of terms such as "artificial intelligence," "machine learning," "algorithms" and others.
  • Current inventories of tools: The assessment should include an up-to-date inventory of all AI tools that the organization currently uses or can access. If multiple AI systems exist, there also needs to be a clear understanding of whether the systems can communicate with each other or have dependencies that may heighten risk (see the sketch following this list).
  • AI governance programs: All current AI usage and governance programs should be addressed in the assessment, along with commentary on any needed updates or additional written policies and procedures.
  • Data elements used: The organization should document exactly which data current AI tools can use and whether that usage could lead to potentially harmful outcomes.
  • Emerging regulation: State, federal and international regulation of AI usage is emerging rapidly. The assessment should address all AI regulatory frameworks relevant to the organization's compliance requirements.
  • Contractual liability: If contractual agreements with third parties govern AI system usage and/or its limitations, the contract should address several considerations. These may include, but aren't limited to, the data elements shared, intellectual property ownership, potential legal liabilities, insurance requirements, security control mandates for data usage and protection, and any state, federal or international regulatory compliance requirements.
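To make the inventory component concrete, the minimal sketch below shows one way an organization might structure its AI tool listing and flag entries that warrant heightened review. This is an illustration only, not a prescribed format: the schema, field names and sample tools (AIToolRecord, heightened_risk, "InternalSummarizer") are hypothetical, and many organizations will maintain such inventories in a GRC platform or spreadsheet rather than in code.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI tool inventory."""
    name: str                  # e.g., "ChatGPT"
    vendor: str                # e.g., "OpenAI" or "in-house"
    business_use: str          # what the tool is used for
    data_elements: list[str] = field(default_factory=list)  # e.g., ["PII"]
    depends_on: list[str] = field(default_factory=list)     # other AI systems it relies on

def heightened_risk(inventory: list[AIToolRecord]) -> list[AIToolRecord]:
    """Flag tools that handle PII or depend on other AI systems."""
    return [t for t in inventory if "PII" in t.data_elements or t.depends_on]

# Example usage with hypothetical entries
inventory = [
    AIToolRecord("ChatGPT", "OpenAI", "drafting client communications",
                 data_elements=["PII"]),
    AIToolRecord("InternalSummarizer", "in-house", "summarizing reports",
                 depends_on=["ChatGPT"]),
]
for tool in heightened_risk(inventory):
    print(f"Review required: {tool.name} ({tool.vendor})")
```

Whatever the format, the point is the same: each entry should capture the data elements a tool touches and its dependencies on other AI systems, since those two attributes drive most of the risks described above.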

Any organization considering generative AI tools should embed a formal risk management plan for AI usage. A cross-divisional effort among several key stakeholders will be required: risk managers will need to coordinate between legal, compliance, human resources, operations, IT, marketing and others, while closely monitoring emerging risks as AI systems become more widely used.
