The Minimum Data Set (MDS) 3.0 is set for implementation Oct. 1, 2010. As physicians, nurses, and others learn more about the changes to the instrument, they are thinking about how they will train staff, what the changes will mean to their data collection, and how MDS data will contribute to quality improvement opportunities. As they prepare to implement and use MDS 3.0, emotions are mixed, including anticipation and...fear.
 
“Change is good, but change is hard,” says Tom Dudley, MS, RN, technical advisor in the Division of Chronic and Post Acute Care, Office of Clinical Standards and Quality, at the Centers for Medicare & Medicaid Services (CMS).
 
“We knew that change is very difficult, and we didn’t want to make change just for the sake of change. We wanted to increase the clinical relevance for providers, and we wanted to honor the patient and family in the documentation process,” says Debra Saliba, MD, MPH.
 
“If facilities are spending the time to collect data, getting something that is useful to them makes it a better investment,” says Saliba, who is director of the University of California, Los Angeles/Jewish Homes Borun Center and principal investigator for the RAND Corp. and Veterans Administration (VA) national development and testing of MDS 3.0.

Designed For New Care Modes

MDS 3.0 was designed to introduce advances in assessment measures, improve accuracy and validity of data, improve user satisfaction, eliminate poorly performing items, redesign the form, and enable briefer assessment periods for clinical items. It also is intended to maintain the ability to use MDS data for quality indicators (QIs), quality measures (QMs), quality improvement, and payment.
 
Perhaps the most important goal, however, was to increase the document’s clinical relevance to providers. But MDS 3.0 is designed not only to improve resident assessment and the quality of clinical information for practitioners; it also pays tribute to the culture change movement by creating expectations that patients will provide information and input during assessments and have a role in decision making.
 
The changes to the MDS were based on measurement science and guidance from stakeholders, users, and content experts. It has been 15 years since the MDS was last updated, and “that’s way too long,” Dudley says. CMS took the changes seriously and wanted to provide facilities with an instrument that would be useful and easier to use, he says.
 
Saliba explains, “From the facility perspective, they invest a great deal in completing the MDS, and the content becomes the foundation for quality measures. Having better, more accurate data that gives facilities more information about changes over time can be immensely helpful.”

Smooth Flying For Pilot Testing

Before finalizing MDS 3.0, CMS contracted with RAND Corp. and Harvard University to evaluate the proposed revisions.
 
In total, MDS 3.0 was tested in a national sample of more than 4,500 residents from 71 community nursing facilities in eight states and 19 VA facilities in six states.
 
The national pilot showed that changes to the MDS produced a more efficient assessment and that better quality information was obtained in less time.
 
Findings showed that MDS 3.0 items demonstrated excellent or very good reliability when used by both research and facility nurses and that the revised instrument improved assessments while decreasing the time needed to complete them. The average time for completing MDS 3.0 was 45 percent less than the average time for MDS 2.0.

However, even before the national study, there was a smaller pilot. “We wanted to have the best instrument possible going into the national study,” says Saliba. “We made numerous changes at that point.”
 
In the national study, CMS learned that there were some items in the section about preferences for daily activities that people said didn’t work. “So we took them out,” she notes, adding, “There also were some things that we tried that didn’t work better than what was in the MDS 2.0, so we didn’t implement those changes. We were adamant that there was no reason to make people change unless the changes were for the better and made a positive difference.”

Nurses Give It A Thumbs Up

All of the efforts to test the instrument and solicit feedback clearly made a positive difference. Facility nurses testing MDS 3.0 generally were enthusiastic about the revised instrument. Not only did 81 percent say that it is more clinically relevant, but 85 percent said that they believed that it would help them identify problems that might not otherwise have been noticed. Additionally, 89 percent said that the MDS 3.0 items allowed for a more accurate report of a resident’s characteristics, 79 percent indicated that the revised instrument better reflects best practices and standards, and 85 percent said they found questions on 3.0 to be worded more clearly.
 
During a recent MDS 3.0 train-the-trainer session, Dudley says, “Participants were overwhelmingly excited about it.” He adds, “We’ve received a lot of positive comments, although there are still some tweaks that need to be made.” Ultimately, he observes, “this instrument will give providers more information about the needs of residents and help them develop a [better] plan of care.”

Big Change With Big Potential?

One of the most significant changes to the MDS—the inclusion of patient interviews as part of assessments—is implemented throughout the instrument, and it is causing some anticipatory anxiety for facility staff. But while this addition may create some worry for providers, it ultimately is designed to enrich the data and help the facility design care plans that address patients’ individual needs and issues. As Saliba says, “We’ve always been tasked to bring the resident into the process, and now we have a way to do this. It will help ensure a more transparent process for consumers.”
 
While nurses and others have expressed some concerns about how the resident interviews would work and whether they could be used effectively, the pilot study of MDS 3.0 described above showed that it successfully included resident voices. According to the nurses completing the instrument, the majority of residents were able to complete the interview sections, and the items provided useful clinical insights.

Build Staff Interviewing Skills

Providing staff with adequate and consistent training on how to conduct interviews may increase their confidence, MDS planners say. To that end, the Picker Institute is developing a training video that will enable staff to watch residents being interviewed.
 
“We know that in teaching staff, a picture is worth a thousand words,” Saliba notes. “When I conduct trainings, I first try to help staff understand the why, then we talk about ways to make interviews go more smoothly. Then we show sample interviews and have people practice with each other.” She stresses that it will take time and experience for people to feel comfortable with the interviews.
 
At a program session at the American Medical Directors Association’s annual symposium in March, Karen Leible, MD, CMD, said, “This is exciting as we’re moving forward in that we are talking to residents, interviewing them, and including them. It’s not just about subjective observations,” added Leible, who is chief clinical services officer at Pinon Management, which provides full-service operations for skilled nursing and assisted living.
 
Facility nurses in the national pilot study seemed to share Leible’s enthusiasm. In fact, 84 percent said that the structured interview sections—on cognition, mood, customary routine, activities, and pain—improved their knowledge of residents’ health conditions.
 
The involvement of residents and families in the assessment process seems to mirror CMS’ commitment to promoting resident-centered care. As Saliba notes, “We hope this is a useful tool for facilities to move forward on person-centered care. I think that having these questions will give facilities something to facilitate person-centered assessments. This, hopefully, will be useful to leaders in culture change and more recent adopters of person-centered care.”

The patient interviews are not the only aspect of MDS 3.0 that has caused initial anxiety for some. “The most common comment is about MDS 3.0’s length—38 pages,” says Dudley.
 
However, he stresses that the type font is much larger to increase readability, the page breaks are more practical and user-friendly, and definitions and other information that people need as they complete the assessments are right there on the page. “It is not really longer; instead, it is more efficient and clinically relevant,” he says.

New Assessments, Quality Data

One significant change to the MDS is that Section V is now titled “Care Area Assessment (CAA) Summary” instead of “Resident Assessment Protocols (RAPs).” The 20 CAA problem areas are the same as the 18 RAPs, with the addition of “Pain” and “Return to Community Referral.” Instead of a standardized care plan, triggers will identify areas that need a care plan, and the facility can develop one based on its own clinical practices.
 
For example, the mood care area is triggered if any of the following is elicited during the interview: 
  • The resident’s response indicates that he/she has had thoughts that he/she would be better off dead or has had thoughts of hurting him/herself; or
  • The staff assessment of resident mood suggests that the resident feels life isn’t worth living, wishes for death, or attempts to harm self.
Dudley notes that “the actual algorithm is a bit more complex, but basically the resident’s response triggers the need for the facility to intervene through the development of an individualized plan of care specifically addressing areas of concern impacting that person’s well-being.”
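To make the trigger logic concrete, here is a minimal sketch in Python of the simplified mood trigger rule described above. The field names are hypothetical, and, as Dudley notes, the actual CAA algorithm is more complex than this two-part check.

```python
# Illustrative sketch only: a simplified version of the mood care area
# trigger described above. Field names are hypothetical, and the actual
# MDS 3.0 CAA logic is more complex than this two-part rule.

def mood_care_area_triggered(resident_interview: dict, staff_assessment: dict) -> bool:
    """Return True if the simplified mood trigger rule is met."""
    # Resident interview: thoughts of being better off dead or of self-harm.
    resident_flag = resident_interview.get("thoughts_better_off_dead_or_self_harm", False)

    # Staff assessment: feels life isn't worth living, wishes for death,
    # or attempts to harm self.
    staff_flag = any(
        staff_assessment.get(item, False)
        for item in ("life_not_worth_living", "wishes_for_death", "attempts_self_harm")
    )

    # Either source of evidence triggers the mood care area, which in turn
    # calls for an individualized care plan addressing the concern.
    return resident_flag or staff_flag


# Example: a staff observation alone is enough to trigger the care area.
print(mood_care_area_triggered({}, {"wishes_for_death": True}))  # True
```

Either path into the rule leads to the same place: an individualized plan of care rather than a standardized one.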
 
There are several areas of the MDS where changes involve more personal and direct assessments designed to increase resident input and encourage more individualized care planning. One of these is Section C (Cognitive Patterns). Added here is the use of the Brief Interview for Mental Status (BIMS). This involves the repetition and recall of three words—sock, blue, and bed. “My facilities already have instituted the BIMS as a standard test,” said Leible. “We get better data from this tool versus others. We so often think that [residents with dementia] can’t tell us anything, but we can get information from them.”
 
In fact, in the national pilot study 90 percent of residents were able to complete the test. Most nurses (78 percent) said they preferred this new assessment method, and 88 percent reported that the BIMS gave them new insights into residents’ cognitive abilities.
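As a purely hypothetical illustration of how a facility system might total a BIMS interview, the sketch below assumes the standard published BIMS scoring (a 0–15 total with summary categories). The article itself mentions only the three-word repetition and recall, so the remaining items and cut points shown here come from the published instrument rather than from this article.

```python
# Hypothetical sketch of totaling a Brief Interview for Mental Status (BIMS).
# Point values and summary categories follow the standard published BIMS
# scoring (0-15 total); they are not spelled out in the article itself.

def bims_total(repetition: int, year: int, month: int, day_of_week: int,
               recall: int) -> int:
    """Sum the BIMS items: repetition (0-3), year (0-3), month (0-2),
    day of week (0-1), and recall of the three words (0-6)."""
    score = repetition + year + month + day_of_week + recall
    if not 0 <= score <= 15:
        raise ValueError("BIMS total must fall between 0 and 15")
    return score


def bims_category(score: int) -> str:
    """Map a BIMS total to the standard summary categories."""
    if score >= 13:
        return "cognitively intact"
    if score >= 8:
        return "moderately impaired"
    return "severely impaired"


# Example: a resident repeats all three words, knows the year and month,
# misses the day of week, and recalls two of the three words without cues.
print(bims_category(bims_total(3, 3, 2, 0, 4)))  # moderately impaired
```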
 
Also in Section C, the Confusion Assessment Method replaces old delirium items. This tool includes two parts—an assessment instrument that screens for overall cognitive impairment and a second part that includes only those four features found to have the greatest ability to distinguish delirium or reversible confusion from other types of cognitive impairment.
 
In Section D (Mood), the MDS 3.0 includes the Patient Health Questionnaire (PHQ-9) for screening depression. This is a checklist of nine symptoms of depression that is completed via resident interview. “This tool brought out emotions when nurses tested it. One told me that she cried when she interviewed one patient. He had always seemed so happy, and no one had realized he was depressed. The interview was able to elicit feelings that he had kept concealed from staff and others. This is something of real value,” says Dudley.
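For illustration, a minimal sketch of totaling a PHQ-9 screen follows, assuming the standard scoring of the published instrument (nine items, each rated 0–3, for a total of 0–27) and its commonly cited severity bands. The MDS 3.0 adaptation may score and interpret the items differently, and none of the names or thresholds below come from the article.

```python
# Illustrative sketch of totaling a PHQ-9 depression screen, assuming the
# standard scoring: nine items, each rated 0-3 for frequency over the past
# two weeks, giving a total of 0-27. The MDS 3.0 adaptation and its own
# cut points may differ; nothing here is drawn from the article itself.

PHQ9_SEVERITY_BANDS = [
    (20, "severe"),
    (15, "moderately severe"),
    (10, "moderate"),
    (5, "mild"),
    (0, "minimal or none"),
]


def phq9_total(item_scores: list[int]) -> int:
    """Sum nine item scores, each expected to be 0, 1, 2, or 3."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 expects nine item scores of 0-3 each")
    return sum(item_scores)


def phq9_severity(total: int) -> str:
    """Map a PHQ-9 total (0-27) to a conventional severity label."""
    for threshold, label in PHQ9_SEVERITY_BANDS:
        if total >= threshold:
            return label
    raise ValueError("total must be non-negative")


# Example: mostly low scores with a few elevated items.
scores = [1, 2, 0, 1, 3, 0, 1, 2, 0]
print(phq9_total(scores), phq9_severity(phq9_total(scores)))  # 10 moderate
```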

Data Collection: Just The Beginning

Of course, as with all components of the MDS 3.0, there are options to use observations in making assessments. If a resident is unable to be interviewed, then staff can document observations about signs and symptoms over a two-week period. 
 
Once the information is collected, says Saliba, “You need a strategy about how to use it for ongoing monitoring so that it is part of your facility’s flow of activities.
 
“One challenge with assessments is that you will find out something about a patient, and you need to know what you will do with this information. If you conduct an assessment and learn that something is important to the person, you need to determine how you will act on that information.”
 
Saliba stresses that facility leaders need to help staff feel more comfortable thinking through how assessments and discoveries will fit in with work flow and interdisciplinary action plans.
 
“That is where the physicians and all interdisciplinary team members come in—determining how we will integrate data into meaningful action and care plans,” she says.
 
Joanne Kaldy is a freelance writer and communications consultant based in Harrisburg, Pa.