SGEA 2015 CONFERENCE ABSTRACT (EDITED). Evaluating Interprofessional Teamwork During a Large-Scale Simulation. Courtney West, Karen Landry, Anna Graham, and Lori Graham. CONSTRUCT: This study investigated the multidimensional measurement of interprofessional education (IPE) teamwork as part of large-scale simulation training.
Background: Healthcare team function has a direct impact on patient safety and quality of care. However, IPE team training has not been the norm. Recognizing the importance of developing team-based collaborative care, our College of Nursing implemented an IPE simulation activity called Disaster Day and invited other professions to participate. The exercise consists of two sessions: one in the morning and another in the afternoon. The disaster scenario is announced just prior to each session, which consists of team building, a 90-minute simulation, and debriefing. Approximately 300 Nursing, Medicine, Pharmacy, Emergency Medical Technician, and Radiology students and over 500 standardized and volunteer patients participated in the Disaster Day event. To improve student learning outcomes, we created 3 competency-based instruments to evaluate collaborative practice in a multidimensional fashion during this exercise.
Approach: A 20-item IPE Team Observation Instrument, designed to assess interprofessional teams' attainment of Interprofessional Education Collaborative (IPEC) competencies, was completed by 20 faculty and staff observing the Disaster Day simulation. One hundred sixty-six standardized patients completed a 10-item Standardized Patient IPE Team Evaluation Instrument developed from the IPEC competencies and adapted items from the 2014 Henry et al. PIVOT Questionnaire. This instrument assessed the standardized or volunteer patient's perception of the team's collaborative performance. A 29-item IPE Team's Perception of Collaborative Care Questionnaire, also created from the IPEC competencies and divided into 5 categories of Values/Ethics, Roles and Responsibilities, Communication, Teamwork, and Self-Evaluation, was completed by 188 students: 99 from Nursing, 43 from Medicine, 6 from Pharmacy, and 40 participants who belonged to more than one component, were students at another institution, or did not indicate their institution. The team instrument was designed to assess each team member's perception of how well both the team and the member individually met the competencies. Five of the items on the team perception questionnaire mirrored items on the standardized patient evaluation: demonstrated leadership practices that led to effective teamwork, discussed care and decisions about that care with the patient, described roles and responsibilities clearly, worked well together to coordinate care, and good/effective communication.
Results: Internal consistency reliability of the IPE Team Observation Instrument was 0.80. For 18 of the 20 items, more than 50% of observers indicated the item was demonstrated. Of those, 6 items were observed by 50% to 75% of the observers, and the remaining 12 were observed by more than 80% of the observers. Internal consistency reliability of the IPE Team's Perception of Collaborative Care Instrument was 0.95. The mean response score, on a scale from 1 (strongly disagree) to 4 (strongly agree), was calculated for each section of the instrument. The overall mean score was 3.57 (SD = 0.11). Internal consistency reliability of the Standardized Patient IPE Team Evaluation Instrument was 0.87. The overall mean score was 3.28 (SD = 0.17). The ratings for the 5 items shared by the standardized patient and team perception instruments were compared using independent sample t tests. Statistically significant differences (p < .05) were present in each case, with the students rating themselves higher on average than the standardized patients did (mean differences between 0.2 and 0.6 on a scale of 1-4).
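The statistics reported above rest on two standard computations: Cronbach's alpha for internal consistency and an independent-samples t test for the shared-item comparison. The sketch below illustrates both in pure Python with made-up 1-4 rating data; the abstract does not state which software the authors used, and the function names and sample values here are hypothetical.

```python
import math
import statistics

def cronbach_alpha(items):
    # items: one list of respondent scores per instrument item.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    k = len(items)
    item_vars = [statistics.variance(col) for col in items]
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

def independent_t(a, b):
    # Student's independent-samples t statistic with pooled variance.
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a) +
              (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        pooled * (1 / na + 1 / nb))

# Illustrative (made-up) 1-4 ratings: three items, five respondents each.
ratings = [[3, 4, 3, 4, 3],
           [3, 3, 3, 4, 4],
           [4, 4, 3, 4, 3]]
alpha = cronbach_alpha(ratings)

# Made-up student vs. standardized-patient ratings on one shared item.
t = independent_t([4, 4, 3, 4, 4], [3, 3, 4, 3, 3])
```

A positive t statistic here reflects the same direction of difference reported in the results: the first group (students, in the study) rating higher on average than the second (standardized patients).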
Conclusions: Multidimensional, competency-based instruments appear to provide a robust view of IPE teamwork; however, challenges remain. Due to the large scale of the simulation exercise, observation-based assessment did not function as well as self-based and standardized-patient-based assessment. To promote greater variation in observer assessments during future Disaster Day simulations, we plan to change the rating scale from "not observed," "observed," and "not applicable" to a 4-point scale and reexamine interrater reliability.
Keywords: assessment; disaster preparedness; interprofessional teams; training.