Long term and post-acute care providers are under pressure to manage complex dementia symptoms with fewer staff, rising acuity, and closer scrutiny of psychotropic use. At the same time, interest is growing in whether artificial intelligence (AI) can support person-centered care rather than replace it.

A multi-site memory care operator in Oregon recently completed a second pilot of AI companions in six secured memory care communities. The goal in this second phase was not simply to see whether residents would talk to an AI avatar, but to understand how this technology might fit inside an already strong person-centered model—and what it might reveal about residents’ day-to-day mood and behavior.

Inside Pilot 2

Pilot 2 followed 21 residents with diagnosed dementia living in six memory care communities. The residents represented a clinically complex group: varying dementia etiologies (Alzheimer’s, vascular, Lewy body, alcohol-related, and mixed types), multiple chronic comorbidities, high assistance needs with activities of daily living, and existing behavior plans addressing concerns such as sundowning, wandering, trauma-linked distress, depression, and anxiety.

All participating communities were already operating with an HCBS-aligned, person-centered care model, using a “Best Friends”–style approach that emphasizes knowing the person, preserving autonomy, and prioritizing non-pharmacologic strategies before medications.

Over a 29-day observation window, the project team integrated five sources of information for each resident: standardized clinical and behavioral summaries, daily mood and sentiment documentation, PRN medication records, AI usage logs, and narrative descriptions of triggers and responses. This design allowed the team to view the AI companion as one intervention among many inside a complex care environment—not a stand-alone solution.

Emotional Vital Signs: What the Mood Data Showed

For 18 of the 21 residents, complete numeric sentiment scores were available. Staff documented daily mood using descriptors such as calm, cooperative, tearful, or engaged, along with a numeric score on a –2 to +2 scale (–2 = very negative, 0 = neutral, +2 = very positive).

Across the group, average sentiment scores ranged from 1.5 to 2.5, with an overall mean of 1.97, indicating that day-to-day emotional tone was mildly to clearly positive. Notably, no resident had a neutral or negative average mood during the period.

To make this more actionable for frontline teams, the project categorized residents into three “emotional vital sign” bands: mild positive (1.5–1.89), moderate positive (1.9–2.19), and high positive (2.2 or higher). Most residents clustered in the moderate positive band, with a smaller subset in the high positive range.
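
For teams that want to reproduce this banding in a dashboard or spreadsheet, the cut points reduce to a simple threshold lookup. The short Python sketch below is illustrative only; the function name and the fallback label for scores below 1.5 are ours, not the pilot's:

    def emotional_vital_sign_band(avg_sentiment: float) -> str:
        # Map a resident's average documented mood score to one of the
        # pilot's published "emotional vital sign" bands.
        if avg_sentiment >= 2.2:
            return "high positive"
        if avg_sentiment >= 1.9:
            return "moderate positive"
        if avg_sentiment >= 1.5:
            return "mild positive"
        return "review needed"  # hypothetical label; below the lowest observed band

    # Example: the group mean of 1.97 falls in the moderate positive band.
    print(emotional_vital_sign_band(1.97))  # moderate positive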

Qualitative notes frequently included words like pleasant, smiling, redirectable, and cooperative. When negative descriptors appeared—tearful, worried, or irritable—they were usually linked to understandable events such as pain, constipation, noise, personal-care tasks, or concerns about belongings, and they generally resolved with targeted non-pharmacologic approaches.

Operationally, this suggests that even in a high-acuity memory care population, teams can sustain broadly positive emotional baselines when person-centered, non-pharmacologic care is consistently delivered. The AI companion layered onto that foundation rather than replacing it.

How Residents Actually Used the AI

AI usage data offered a nuanced picture. Among residents with available data, usage averaged 18.6 calls and 46.5 total minutes over the 29 days. Many residents engaged with the AI in short, “snack-sized” sessions: a few minutes of conversation, singing, or reminiscing, often paired with another calming activity.

Some residents used the companion frequently and for longer durations, while others did not engage with the AI at all during the window yet still maintained mildly to clearly positive mood averages. When the team examined only residents with non-zero AI usage and sentiment scores, more minutes tended to correlate with slightly higher average mood, but variability and small sample size prevent any causal claims.
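
As a sketch of the kind of exploratory check described above, the association between total AI minutes and average mood can be computed as a Pearson correlation with Python's standard library. The values below are invented for illustration, not pilot data:

    from statistics import correlation  # Pearson's r; Python 3.10+

    # Hypothetical per-resident values, limited (as in the pilot's analysis)
    # to residents with non-zero AI usage and a sentiment average.
    ai_minutes = [12.0, 30.5, 55.0, 88.0, 140.0]  # total minutes over 29 days
    avg_mood = [1.6, 1.8, 1.9, 2.0, 2.1]          # average sentiment scores

    r = correlation(ai_minutes, avg_mood)
    print(f"Pearson r = {r:.2f}")  # a positive r is an association, not causation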

The practical takeaway for operators is that AI companions appeared safe and potentially supportive for some residents in this pilot, and that resident preference will drive usage patterns.

PRN Use and Behavior Outcomes

A central concern for regulators and families is whether new technology will alter PRN medication patterns, especially for psychotropics. In this pilot, psychotropic PRN use was rare, and when administered, doses were generally effective without escalation.

Most PRN medications given during the period were for pain, constipation, and other physical comfort needs, not behavioral crises. Serious behavior incidents—severe agitation, aggression, or elopement—were uncommon, even among residents with prior histories of those concerns. Updated behavior plans, consistent routines, and environmental adjustments appeared to carry most of the clinical weight.

Importantly, there was no evidence that AI usage increased agitation, PRN reliance, or problem behaviors. Instead, staff often used the AI companion as an additional non-pharmacologic option while addressing pain, comfort, or environmental triggers.

How Teams Integrated the AI Companion

Narrative documentation revealed how staff folded the AI into everyday routines. Teams paired AI time with known calming moments—after meals, during sundowning, or while waiting for family visits. Residents who enjoyed conversation, faith-based topics, or music often gravitated toward the AI as another friend.

Staff noted that laughing, reminiscing, or singing with the avatar sometimes helped reset tense situations, buying time to address underlying needs before medications were considered. At the same time, staff emphasized that the AI companion did not replace human relationships. Rather, it functioned as a consistent, patient conversation partner and a structured way to document and visualize mood trends.

Lessons for Providers

Several practical lessons emerge for organizations considering similar technologies:

  • Keep person-centered care first. Positive mood patterns were rooted in an existing, robust care model. AI sharpened and supported that work but did not substitute for it.
  • Treat sentiment as an emotional vital sign. Simple visuals summarizing average mood by resident and over time can act as an early warning system, prompting earlier non-pharmacologic interventions before small declines become crises.
  • Expect wide variation in resident use. Some residents will engage frequently; others rarely or not at all. Respecting those preferences is essential to person-centered practice.
  • Invest in staff training and boundaries. Short scenario-based training, clear privacy expectations, and reassurance that AI is a support—not a replacement—help ensure the tool feels like a resource rather than a burden.
  • Acknowledge limitations. This pilot had a short time frame, a small analytic sample, and no comparison group, so findings should be viewed as directional rather than definitive.

Looking Ahead

As dementia prevalence rises and workforce challenges persist, providers are searching for tools that can amplify human care, not automate it away. Early results from this second AI companion pilot suggest that, when implemented within a strong person-centered framework, AI companions may help teams visualize emotional patterns, add another non-pharmacologic option, and maintain positive mood in a clinically complex memory care population.

Further work—longer follow-up, larger samples, comparisons with non-AI communities, and staff-focused outcomes—will be essential. For now, emotional vital signs offer a promising way to bring data and person-centered practice together at the bedside.

Christian A. Mason, D.B.A., is managing member of Pacific Living Centers and can be reached at chris.mason.shm@gmail.com. Monica Tsai, DSocSci, is CEO of CloudMind US Inc. Carl Mason is president and COO of Senior Housing Managers, Caitlin Buckley is chief people officer with Senior Housing Managers, and Michele Nixon is vice president of operations at Pacific Living Centers.

Listen to our Perspectives in Long Term Care podcast: Artificial Intelligence in Long Term Care.

Provider magazine includes information from a variety of sources, such as contributing experts. The views expressed by external contributors do not necessarily reflect the views of Provider magazine and AHCA/NCAL.