Background: Readiness to perform lifesaving interventions during an emergency depends on a person's preparation to execute the required skills proficiently. Graphically plotting the performance of tourniquet users in simulation has previously helped us understand how users actually behave. The purpose of this study was to explore performance assessment and learning curves to better understand how to develop best teaching practices.
Methods: These were retrospective analyses of a convenience sample of data from a prior manikin study of 200 tourniquet uses among 10 users. We sought to generate hypotheses about performance assessments relevant to developing best teaching practices. The focus was on different metrics of user performance.
Results: When one metric was chosen over another, failure counts summed cumulatively over 200 uses differed by as much as 12-fold. That difference also indicated that the degree of challenge posed to user performance depended on the metric chosen. When we ranked user performance with one metric and then with another, most users (90%; nine of 10) changed rank: five rose and four fell. The choice of metric thus produced substantial differences in performance outcomes, which in turn changed how the outcome was portrayed and interpreted. Hypotheses generated included the following: the usefulness of a specific metric may vary with the user's level of skill from novice to expert; demonstration of the step order in skill performance may suffice for initial training of novices; a mechanical metric of effectiveness, such as pulse stoppage, may aid in later training of novices; and teaching users how to practice on their own and self-assess performance may aid their self-development.
Conclusion: In this study of simulated tourniquet use, the outcome of performance assessment varied with the choice of metric.