
President Biden has issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," establishing new standards for artificial intelligence (AI) safety and security.

The executive order includes several key actions:

  • It requires developers of powerful AI systems to share safety test results and critical information with the US government.
  • It mandates the development of standards, tools and tests to ensure the safety and security of AI systems.
  • It addresses the risks of using AI to engineer dangerous biological materials.
  • It establishes guidelines for detecting AI-generated content and authenticating official content.

On the issue of data privacy, the order directs federal support for privacy-preserving techniques in AI development. The order also aims to prevent algorithmic discrimination and ensure fairness in the criminal justice system by developing best practices for the use of AI.

In terms of consumer protection, the order focuses on the responsible use of AI in healthcare and the development of affordable and safe drugs. It also aims to leverage AI in education and support workers by mitigating the risks of workplace surveillance, bias and job displacement, and investing in workforce training.

In addition, the order seeks to promote innovation and competition through a pilot of the National AI Research Resource and to support small developers and entrepreneurs.

Lastly, the order directs the government to issue guidance for the responsible use of AI, improve AI procurement and accelerate the hiring of AI professionals. The Administration will also work with allies and partners to establish an international framework for AI governance.

Recommendations for AI compliance

Based on the summary of the executive order, here are some recommendations for insureds in various industries to address AI compliance:

Developers of AI systems. Insureds involved in developing powerful AI systems should ensure they comply with the requirement to share safety test results and critical information with the US government. They should establish processes to notify the government when training AI models that pose risks to national security, economic security, or public health and safety. Insureds should also conduct red-team safety tests and share the results with the government.

Critical infrastructure sectors. Insureds operating in critical infrastructure sectors should pay attention to the AI safety and security standards set by the National Institute of Standards and Technology. They should implement these standards accordingly, as they will be relied upon by the Department of Homeland Security's AI Safety and Security Board. Additionally, insureds should address the threats AI systems pose to critical infrastructure, as well as related cybersecurity risks.

Life sciences projects. Insureds involved in life sciences projects should be prepared to comply with strong new standards for biological synthesis screening. They should ensure appropriate screening of AI-enabled processes and manage the risks associated with the development of dangerous biological materials.

Content providers. Insureds involved in content creation and distribution should be aware of the need to establish standards and best practices for detecting AI-generated content and authenticating official content. They should follow the guidance provided by the Department of Commerce for content authentication and watermarking to label AI-generated content accurately.

Federal agencies. Insureds working with federal agencies should evaluate their data collection and use practices, especially commercially available information containing personally identifiable data. They should strengthen privacy guidance to account for AI risks and ensure compliance with privacy-preserving techniques.

Landlords, federal benefits programs and federal contractors. Insureds in these sectors should ensure that AI algorithms aren't used to exacerbate discrimination. They should follow the clear guidance provided to prevent algorithmic discrimination and cooperate with the Department of Justice and federal civil rights offices on investigating and prosecuting civil rights violations related to AI.

Healthcare industry. Insureds in the healthcare industry should embrace the responsible use of AI and establish safety programs to address harms or unsafe practices involving AI. They should also consider leveraging AI to develop affordable and life-saving drugs while ensuring patient safety.

Education sector. Insureds in the education sector should explore the deployment of AI-enabled educational tools, such as personalized tutoring. They should use available resources and support to ensure the responsible use of AI in transforming education.

Workers and workplaces. Insureds should develop principles and best practices to mitigate the potential harms and maximize the benefits of AI for workers. They should address job displacement, labor standards, workplace equity, health and safety. Insureds should also invest in workforce training and development accessible to all employees.

Small developers and entrepreneurs. Insureds in this category should take advantage of the technical assistance and resources provided to promote a fair, open and competitive AI ecosystem. They should seek support in commercializing AI breakthroughs and remain mindful of the Federal Trade Commission's exercise of its authorities in this area.

Government agencies. Insureds working with government agencies should familiarize themselves with the guidance issued for the use of AI. They should ensure compliance with the standards to protect rights and safety, improve AI procurement and strengthen AI deployment. Insureds should also consider participating in the government-wide AI talent surge and provide AI training for employees.

It's important for insureds in various industries to closely monitor developments in AI compliance related to the executive order and future rules, seek legal advice if necessary and adapt their practices to ensure compliance with the new standards and requirements outlined in the order.

Leveraging Cyber insurance

Cyber insurance and other insurance policies may help organizations that believe they may be impacted by claims related to the use of emerging technology. Claims arising from specific cyber incidents, cyberattacks or the alleged wrongful collection and/or sharing of information, whether directly or indirectly through AI, may be covered. Many policies provide access to crisis services, including breach coaches, IT forensics investigators and other breach response experts. Those with Cyber insurance should be mindful of claim reporting obligations, requirements to use insurance panel breach response vendors, evidence preservation and issues that may impact attorney-client privilege.

Organizations should also be aware of rapidly evolving Cyber insurance products that may affect the scope of coverage. The 2023 Cyber insurance market is changing rapidly, and we expect AI to have some form of impact on the products being offered. Cyber insurers are likely to use various methods to limit cascading losses from regulatory risk, such as the issues unfolding around the use of emerging technology. Sub-limits and coinsurance are often imposed for certain cyber losses. In addition, some carriers have modified Cyber policy language to restrict or even exclude coverage for certain incidents that give rise to costs incurred for regulatory investigations, lawsuits, settlements and fines.
