
Across industries, people are taking note of the impact of artificial intelligence (AI). And while AI adoption is still in its infancy, organizations are rapidly working to adapt and integrate the technology within their business models. Our question today is whether disability service providers will be left behind and, if not, how care providers and nonprofit organizations can begin to introduce AI into their disability services.

How can AI make a difference?

Hannah Patterson is a Gallagher producer associate in the US who has expertise in coverage for organizations that serve people who are blind or visually impaired. Talking about the possibilities of AI, Hannah points to adaptive technology training, which empowers the disabled community to use AI‑powered tools effectively.

Modern screen readers, speech‑to‑text systems, AI-integrated smartphones, object‑recognition apps and smart mobility devices can help individuals with vision loss navigate street layouts, identify nearby landmarks and shop signs, and avoid potential hazards.

And these aids are among many supports on offer. Adaptive-tech training also helps people with hearing or communication disabilities, who can leverage these tools to turn speech into real‑time text and navigate electronic devices through eye tracking.

Early innovations such as these show genuine promise, but there's potential for much more. Beyond tools that directly help disabled people, care organizations are now turning to AI to optimize their internal operations. Ian Ackerman, Gallagher's area vice president with a focus on intellectual and developmental disabilities (IDD) in the US, highlights that some nonprofit organizations are integrating AI agents to support their various back office and administrative tasks.

Legal contract review, human resources support, compliance, safety and risk management are more accessible than ever. In front office operations, AI-powered customer- and consumer-facing technology is allowing organizations to provide more and broader support 24 hours a day, 365 days a year.

What's fascinating about AI innovation for people with disabilities is the pace at which it's evolving. However, it's difficult to gauge the true impact of emerging AI tools, as they are still in the early stages of adoption, bringing both excitement and concern.

Emerging AI use cases for people with disabilities

In the workplace, AI writing assistants can help neurodivergent workers refine emails without the exhausting effort of constantly needing to mask their natural style.

For people who struggle with large blocks of text, AI can summarize long emails or documents into more easily digestible bullet points. Some tools can analyze the tone of an incoming message to help a user understand if it's urgent, polite or frustrated, which is useful for individuals who find social nuances challenging.
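
To make the tone check described above concrete, here is a minimal Python sketch using the open-source Hugging Face transformers library. The default sentiment model is standing in for a true "tone" classifier, and the urgency keywords are an assumed heuristic for illustration, not a vetted approach or any particular product's method.

    # Minimal tone hint for an incoming message (illustrative only).
    from transformers import pipeline

    # Off-the-shelf sentiment model standing in for a "tone" classifier.
    tone_classifier = pipeline("sentiment-analysis")

    URGENT_CUES = {"asap", "urgent", "immediately", "deadline"}  # assumed heuristic

    def describe_message(text: str) -> str:
        """Return a plain-language hint about a message's tone and urgency."""
        tone = tone_classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
        urgent = any(cue in text.lower() for cue in URGENT_CUES)
        return (f"Tone: {tone['label'].lower()} (confidence {tone['score']:.0%}); "
                f"{'likely urgent' if urgent else 'no urgency cues found'}")

    print(describe_message("I need the report ASAP. This is the third time I've asked."))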

Reading product labels while shopping or finding the right ingredients while cooking can be challenging for people with vision loss. Modern AI‑powered smartphone apps can help by reading printed text and identifying objects, giving users instant access to the information they need. With a quick tap on the phone or a simple voice command, the user can get a clear idea of the ingredient or nutritional details or even receive recipe suggestions.
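
A rough sketch of that label-reading flow might look like the Python below, assuming the open-source pytesseract (OCR) and pyttsx3 (offline text-to-speech) libraries; this is one possible pairing for illustration, not how any particular app actually works, and the image filename is a placeholder.

    # OCR a product-label photo, then speak the extracted text aloud.
    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine to be installed
    import pyttsx3      # offline text-to-speech

    def read_label_aloud(image_path: str) -> str:
        """Extract printed text from a label photo and read it out loud."""
        text = pytesseract.image_to_string(Image.open(image_path))
        engine = pyttsx3.init()
        engine.say(text or "No readable text found on this label.")
        engine.runAndWait()
        return text

    ingredients = read_label_aloud("label_photo.jpg")  # placeholder filename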

While AI tools are game changers, they still require interaction, whether through voice command or gesture. Instead of switching between multiple apps to manage their tasks, users can rely on an AI agent.

For example, AI agents could join a meeting on a disabled person's behalf, generate a transcript, schedule follow-ups and even draft emails, pulling data from each connected AI application without any manual intervention.
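
Here is an illustrative Python sketch of such an agent's control flow. Every function in it (join_meeting, create_calendar_event, draft_email) is hypothetical, a stand-in for whatever meeting, calendar and email services an organization actually connects.

    # Hypothetical meeting agent: transcribe, schedule follow-ups, draft email.
    from dataclasses import dataclass

    @dataclass
    class MeetingResult:
        transcript: str
        action_items: list[str]

    def join_meeting(meeting_url: str) -> MeetingResult:
        # Hypothetical stub: a real agent would call a meeting platform's API.
        return MeetingResult(transcript="(full transcript)", action_items=["send report"])

    def create_calendar_event(title: str) -> None:
        print(f"Scheduled: {title}")  # hypothetical stand-in for a calendar API

    def draft_email(subject: str, body: str) -> None:
        print(f"Draft saved: {subject}")  # hypothetical stand-in for a mail API

    def run_meeting_agent(meeting_url: str) -> None:
        result = join_meeting(meeting_url)
        for item in result.action_items:
            create_calendar_event(f"Follow-up: {item}")
        # Keep a human in the loop: the agent drafts, the person reviews and sends.
        draft_email("Meeting summary", result.transcript)

    run_meeting_agent("https://example.com/meeting")  # placeholder URL

Note the deliberate design choice in the last step: the agent only drafts the email, leaving the person to review and send, consistent with the human-in-the-loop guidance later in this article.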

Seeing through the hype: The growing risks of AI in disability services

Misinformation, bias, data breaches and security threats have emerged as the most significant risks associated with AI use. For people with disabilities, these risks are heightened because these AI tools may affect their daily functioning. When adopting AI solutions, care organizations need to be mindful of the following risks:

Lack of clinical accuracy. Generative AI systems may produce polished responses that, while well written, are factually incorrect. Such inaccurate output (hallucinations) may lead to inappropriate guidance, faulty instructions or errors in automated captions.

These risks become more significant when AI is used for clinical assessment. AI-generated summaries may lack a professional's judgment or assessment, potentially affecting an organization's reputation and its ability to secure claims, funding or government support.

In 2025, the Administrative Review Tribunal in Australia rejected a National Disability Insurance Scheme (NDIS) claim because the supporting report contained incorrect information, reportedly originating from an AI‑generated physiotherapy assessment.1

Bias, hallucination and accessibility concerns. AI tools learn from the data on which they're trained, which means underrepresentation of data on certain disabilities can lead to biased, misleading or incorrect outputs. Some likely scenarios:

  • Without the right training data on diverse speech patterns, assistive voice systems may misrecognize or fail to respond to users with speech disabilities (see the audit sketch after this list).
  • Automated screening systems may misinterpret disability‑related behaviors as errors or risks.
  • Many AI systems operate as "black boxes" whose decision-making processes aren't easily understood. This lack of transparency can make it difficult for disability service organizations to identify and correct errors or biases in the AI's outputs.
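
What might the audit mentioned in the first bullet look like in practice? Below is a minimal Python sketch comparing word error rates (WER) across user groups to spot a voice system that underperforms for people with speech disabilities. The sample transcripts and group labels are illustrative, and the open-source jiwer library is one assumed choice for computing WER.

    # Compare speech-recognition accuracy across user groups (illustrative data).
    from collections import defaultdict
    import jiwer  # open-source word-error-rate library (assumed choice)

    # (group label, human reference transcript, what the voice system heard)
    samples = [
        ("typical speech",    "turn on the kitchen lights", "turn on the kitchen lights"),
        ("speech disability", "turn on the kitchen lights", "turn the kitten lights"),
    ]

    error_rates = defaultdict(list)
    for group, reference, hypothesis in samples:
        error_rates[group].append(jiwer.wer(reference, hypothesis))

    for group, rates in error_rates.items():
        print(f"{group}: mean WER {sum(rates) / len(rates):.0%}")
    # A large WER gap between groups is a signal to retrain or switch tools.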

Data privacy, cyber risks and protection gaps. Disability services increasingly rely on digital tools that process sensitive data, including health information, behavioral patterns and biometric signals. This dependence, in turn, increases their exposure to cyberattacks.

As Gail Murray, principal broker in Australia, ANZIIF (senior associate) CIP, QPIB, emphasizes, "Sensitive healthcare information often flows through these systems, and once it's online, it becomes vulnerable to cyber‑attacks."

A single data breach can disrupt services and erode trust, yet many of these care organizations still lack adequate cyber insurance or haven't kept pace with new AI‑related coverage needs.

Lack of empathy. "There's growing concern about the lack of empathy that can be coded into AI tools, particularly when used to help people with disabilities," says Louise McConnell, SVP, Commercial Insurance, Canada and national practice leader, Nonprofit. "How can we help to ensure that good humans aren't being screened out of the employee recruitment process? How do we ensure that individuals with disabilities are treated fairly? These are some big questions that still need to be answered."

The need for consent and transparency. Paul Eden, Gallagher's managing director of Care and Charities, UK Commercial Division, stresses that care organizations must be transparent about how they use AI, because technology's output can directly influence lives.

Individuals have the right to know when AI is involved in supporting or advising them. Not informing disabled individuals and their families about AI's involvement raises ethical concerns.

The AI risk landscape is rapidly evolving, and so are regulations across regions to ensure the safe and ethical use of these tools. We expect to see a patchwork of new regulations from local, state and federal governments that may increase the difficulty of compliance, especially for multi-state entities. But with a strong grasp of these standards, care organizations can integrate AI into disability services with confidence.

Ensuring safe use of AI: Key regulations

Key frameworks for care organizations to track include the US Department of Justice's guidance on algorithms, AI and disability discrimination in hiring,2 the HIPAA Security Rule on protecting electronic health information,3 the NIST AI Risk Management Framework,4 a growing patchwork of US state AI laws,5 the EU AI Act,6 the UK Information Commissioner's Office guidance on lawfulness in AI7 and, in Australia, an AI framework for NDIS providers.8

Adopting AI responsibly in disability services

Eden urges care organizations to use AI thoughtfully when working with vulnerable people. He stresses the need to balance efficiency and cost benefits with a strong focus on human judgment and ethical oversight. Gallagher's specialists offer these recommendations:

  • Allow technology to assist the process, not run the process. Always double-check key information with a human review.
  • Stay informed about local and national laws on accessibility, privacy and AI use.
  • Work with a trusted third‑party IT provider who understands accessibility standards and secure data handling.
  • Build internal AI capability and expertise.
  • Conduct regular AI impact assessments to help identify, evaluate and mitigate risks.
  • Consult legal and cybersecurity experts when selecting AI tools. Include your insurance broker and risk advisors in the discussion.
  • Use regular audits to keep systems accurate and fair.
  • Run pilots with your actual user groups before full-fledged deployment.
  • Ensure your organization has a documented AI incident response plan.

According to Ian Ackerman, Gallagher area vice president, IDD Specialty, US, "Smart adoption requires a cross-discipline approach. Nonprofits should work closely with competent legal counsel, technology providers and insurance brokers to ensure AI is adopted intelligently and ethically."

Strengthening disability care through smarter risk management

For care providers, insurance arrangements and risk management serve as key resilience layers to protect their initiatives from financial shocks and help them continue their missions to provide support to disabled individuals.

Consulting with a specialist will help in understanding how (or if) policies respond to AI‑related risks. To address evolving challenges, some insurers are adding endorsements, while others are adding exclusions in commercial general liability insurance, Directors and Officers (D&O) liability insurance, professional liability, Errors and Omissions (E&O) insurance and, in some cases, cyber liability insurance along with other management liability policies.

Where AI use leads to disputes, companies may face legal action and often assume their commercial general liability policy will cover defense costs and damages. Insurers are increasingly pushing back on AI‑related claims, creating potential gaps in coverage that businesses need to address early.

The true measure of technological advancement isn't how powerful technology becomes but how seamlessly it amplifies human potential. AI can help care organizations deliver more impactful services, but, as Patterson emphasizes, it remains essential to keep humans in the loop and follow a trust-but-verify approach when adopting AI.


Sources

1Karp, Paul. "Junk AI Reports Harming NDIS Users: Disability Groups," Financial Review, 17 Oct 2025.

2"Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring," ADA.gov, 12 May 2022.

3"HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information," Federal Register, 6 Jan 2026.

4"AI Risk Management Framework," NIST, accessed 2 Feb 2026.

5"US AI Law Tracker — All States," Orrick, accessed 2 Feb 2026.

6"High-Level Summary of the AI Act," EU Artificial Intelligence Act, 27 Feb 2024.

7"How do We Ensure Lawfulness in AI??" Information Commissioner's Office, 28 Oct 2024.

8Cukalevski, Emily. "An AI Framework for NDIS Providers," DSC, 20 Nov 2025.


Disclaimer

The information contained herein is offered as insurance industry guidance and provided as an overview of current market risks and available coverages and is intended for discussion purposes only. This publication is not intended to offer financial, tax, legal or client-specific insurance or risk management advice. General insurance descriptions contained herein do not include complete insurance policy definitions, terms and/or conditions, and should not be relied on for coverage interpretation. Actual insurance policies must always be consulted for full coverage details and analysis.

Insurance brokerage and related services provided by Arthur J. Gallagher Risk Management Services, LLC License Nos. IL 100292093 / CA 0D69293