The rise of telehealth has reshaped modern health care, offering patients unprecedented access to remote care. Yet for millions of people with disabilities, these platforms often fall short. Complex interfaces, poorly designed tools, and a lack of adaptability exclude those who need telehealth the most. Artificial intelligence (AI) is poised to change this narrative by powering solutions that prioritize accessibility—but only if developers, clinicians, and policymakers collaborate to ensure these tools are equitable, ethical, and universally beneficial.
The Silent Barriers in Telehealth
Despite advancements, many telehealth platforms remain inaccessible to individuals with disabilities. Studies and audits have documented significant accessibility gaps in health care platforms, particularly around screen reader compatibility and keyboard navigation. Research on open-source electronic health record (EHR) systems, for example, found only partial accessibility, with recurring problems in both areas. For someone with low vision, an unlabeled graph detailing medication instructions becomes a roadblock. For a person with ALS, a multi-step login process requiring precise mouse clicks can derail an entire appointment.
Regulatory changes are pushing health care providers to address these gaps. The Department of Health and Human Services issued a final rule requiring entities receiving federal funding to meet WCAG 2.1 Level AA standards for digital accessibility, with compliance deadlines of May 11, 2026, or May 10, 2027, depending on organization size. This shift reflects a growing recognition that digital accessibility is not optional but a fundamental component of equitable care.
In long-term and post-acute care (LTPAC) settings, the stakes are even higher. Patients with cognitive impairments, such as dementia, may struggle with voice-activated tools, while those recovering from strokes often face communication barriers that standard telehealth interfaces cannot accommodate.
AI-Powered Solutions Breaking Down Barriers
AI offers unique opportunities to address these challenges by automating accessibility adaptations and personalizing user experiences. Below are three critical areas where AI is making a difference:
1. Simplifying Medical Communication.
Patients with cognitive disabilities or limited health literacy often encounter dense, jargon-filled content in telehealth portals. Natural language processing models can analyze complex medical text and generate plain-language summaries. For example, AI can rephrase “Administer 5 mg of rivaroxaban daily for venous thromboembolism prophylaxis” to “Take one 5 mg blood thinner pill daily to prevent blood clots.” Early implementations suggest that such adaptations reduce patient confusion and improve adherence to treatment plans.
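To make this concrete, here is a minimal sketch of how a telehealth front end might request a plain-language rewrite of a clinical instruction. The /api/plain-language endpoint, request shape, and reading-level options are illustrative assumptions, not a reference to any specific product, and any real deployment would route the output through clinical review before showing it to a patient.

```typescript
// Sketch: request a plain-language rewrite from a summarization service.
// The endpoint and fields below are hypothetical; clinical review of the
// generated text is still required before it reaches a patient.
interface PlainLanguageRequest {
  text: string;                          // original clinical instruction
  readingLevel: "grade-6" | "grade-8";   // assumed target reading levels
}

async function toPlainLanguage(req: PlainLanguageRequest): Promise<string> {
  const response = await fetch("/api/plain-language", {   // hypothetical internal endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) {
    // Fall back to the original instruction rather than hiding it.
    return req.text;
  }
  const { simplified } = await response.json();
  return simplified;
}

// Usage:
// const plain = await toPlainLanguage({
//   text: "Administer 5 mg of rivaroxaban daily for venous thromboembolism prophylaxis",
//   readingLevel: "grade-6",
// });
// → e.g. "Take one 5 mg blood thinner pill every day to help prevent blood clots."
```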
2. Making Visual and Auditory Content Accessible.
Medical imaging is vital for diagnostics, but without descriptions, blind patients miss critical information. AI-driven computer vision tools can automatically generate alt-text for X-rays, MRIs, and ultrasounds. A chest X-ray might be described as “showing a 3 cm shadow in the lower right lung, possibly indicating pneumonia.” Similarly, speech recognition models tailored to atypical speech patterns, such as those common in Parkinson’s disease, are improving transcription accuracy, helping ensure patients’ voices are captured correctly during virtual visits.
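As a rough illustration, the sketch below formats structured imaging findings into screen-reader-friendly alt text. The ImagingFinding shape and its field names are assumptions made for the example; in practice the findings might come from a radiologist’s structured report or a validated vision model rather than free-form generation.

```typescript
// Sketch: turn structured imaging findings into alt text for screen readers.
// The ImagingFinding interface is a hypothetical structure for illustration.
interface ImagingFinding {
  modality: "X-ray" | "MRI" | "Ultrasound";
  region: string;            // e.g. "lower right lung"
  observation: string;       // e.g. "3 cm shadow"
  impression?: string;       // e.g. "possibly indicating pneumonia"
}

function buildAltText(finding: ImagingFinding): string {
  const base = `${finding.modality} showing a ${finding.observation} in the ${finding.region}`;
  return finding.impression ? `${base}, ${finding.impression}.` : `${base}.`;
}

// buildAltText({
//   modality: "X-ray",
//   region: "lower right lung",
//   observation: "3 cm shadow",
//   impression: "possibly indicating pneumonia",
// })
// → "X-ray showing a 3 cm shadow in the lower right lung, possibly indicating pneumonia."
```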
3. Reducing Administrative Burdens.
Clinicians in LTPAC settings spend significant time on documentation, diverting attention from patient care. AI scribes trained on diverse datasets can automate visit summaries, flagging urgent needs such as a nonverbal patient’s gestures signaling pain. Facilities piloting these tools report a 30 percent reduction in charting time, allowing staff to prioritize direct interactions with residents.
Navigating Ethical Challenges
While AI holds immense promise, its implementation requires careful consideration of ethical risks. Many speech recognition systems struggle with accents or speech impairments, leading to errors that disproportionately affect patients with disabilities. Training AI models on datasets that include diverse voices, including people with speech disorders, is essential to prevent bias.
Privacy is another critical concern. Voice-activated tools and personalized interfaces often require sensitive health data. Developers must ensure compliance with regulations like HIPAA while maintaining transparency about how data is used. For example, AI systems should allow patients to opt out of data collection without losing access to core features.
Transparency in AI decision-making is equally vital. Patients deserve clear explanations when AI modifies their experience, such as adjusting font sizes for readability or enabling voice navigation.
Practical Steps for LTPAC Providers
1. Prioritize Inclusive Design.
Involve people with disabilities in the development and testing of AI tools. Feedback from users with lived experience can uncover overlooked barriers, such as the need for customizable interface layouts or alternative input methods like eye-tracking.
2. Train Teams on Accessibility Standards.
Equip staff with guidelines to evaluate AI tools, such as checking compatibility with screen readers or testing color contrast ratios for patients with visual impairments. Simple audits can prevent costly redesigns later.
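As an example of the kind of simple audit mentioned above, the following sketch computes the contrast ratio between a foreground and background color using the WCAG relative luminance formula and checks it against the Level AA threshold for normal text.

```typescript
// Sketch: WCAG 2.x contrast-ratio check for a simple accessibility audit.
// Formulas follow the WCAG definitions of relative luminance and contrast ratio.
type RGB = { r: number; g: number; b: number };   // 0–255 per channel

function relativeLuminance({ r, g, b }: RGB): number {
  const channel = (c: number): number => {
    const srgb = c / 255;
    return srgb <= 0.03928 ? srgb / 12.92 : Math.pow((srgb + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(foreground: RGB, background: RGB): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// WCAG Level AA requires at least 4.5:1 for normal text (3:1 for large text).
function meetsAAForNormalText(fg: RGB, bg: RGB): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}

// Example: dark gray text (#333333) on a white background passes AA for normal text.
// meetsAAForNormalText({ r: 51, g: 51, b: 51 }, { r: 255, g: 255, b: 255 }) → true
```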
3. Advocate for Supportive Policies.
Push for regulatory changes incentivizing accessibility, such as CMS reimbursements for AI tools that demonstrably improve patient outcomes. Collaboration with industry groups can amplify these efforts.
The Future of Inclusive Telehealth
The next generation of telehealth platforms will rely on adaptive AI systems that learn from user interactions and evolve to meet individual needs. Imagine interfaces that automatically adjust text size for patients with macular degeneration or AI avatars that provide real-time sign language translation during virtual visits.
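A minimal sketch of that kind of adaptation appears below, assuming a hypothetical a11y-preferences storage key and CSS custom property: the interface reads a saved text-scale preference and applies it on load, so the adjustment persists across visits and remains visible and explainable to the patient.

```typescript
// Sketch: apply a saved accessibility preference (text scale, contrast) to the UI.
// The storage key, property names, and class name are illustrative assumptions;
// a real platform would sync these preferences with the patient's profile and
// explain any automatic adjustment to the user.
interface AccessibilityPreferences {
  textScale: number;        // 1.0 = default, 1.5 = 150 percent, etc.
  highContrast: boolean;
}

const DEFAULTS: AccessibilityPreferences = { textScale: 1.0, highContrast: false };

function loadPreferences(): AccessibilityPreferences {
  const saved = localStorage.getItem("a11y-preferences");   // hypothetical storage key
  return saved ? { ...DEFAULTS, ...JSON.parse(saved) } : DEFAULTS;
}

function applyPreferences(prefs: AccessibilityPreferences): void {
  const root = document.documentElement;
  root.style.setProperty("--base-font-scale", String(prefs.textScale));
  root.classList.toggle("high-contrast", prefs.highContrast);
}

// On page load: applyPreferences(loadPreferences());
// CSS can then use: font-size: calc(1rem * var(--base-font-scale));
```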
However, technology alone cannot solve systemic inequities. Success hinges on a commitment to human-centered design, where accessibility is not an afterthought but a foundational principle. By fostering partnerships between developers, clinicians, and disability communities, the health care industry can ensure AI serves as a true equalizer, empowering every patient to access care with dignity and ease.
References
• Centers for Disease Control and Prevention. Disability and Health Data System. https://www.cdc.gov/media/releases/2024/s0716-Adult-disability.html
• U.S. Department of Health and Human Services. (2024). Telehealth Accessibility Report.
• World Health Organization. (2023). Global Report on Health Equity for Persons with Disabilities.
• Journal of Medical Systems. (2024). AI Applications in Medical Imaging Accessibility.
• HIPAA Journal. (2023). Ensuring Compliance in AI-Driven Healthcare Tools.
• Google. (2023). Project Guideline: Open-Source AI for Accessibility.
• Centers for Medicare & Medicaid Services. (2024). Health IT Accessibility Standards Update.
• Health Affairs. (2023). Overcoming Telehealth Barriers for Vulnerable Populations.
• New England Journal of Medicine AI. (2024). Addressing Bias in Healthcare AI.
• W3C. (2024). Web Content Accessibility Guidelines (WCAG) 2.2.
Ashim Upadhaya is a software engineer with over eight years of experience in full-stack development, specializing in building accessible and high-performance user interfaces and backend systems to ensure compliance with WCAG 2.0 and A11Y standards. His work focuses on creating inclusive digital health solutions that make wellness programs accessible to all users, including older adults and individuals with disabilities. Contact Ashim on LinkedIn at https://www.linkedin.com/in/ashimupadhaya/.