Comparing Basic Life Support Serious Gaming Scores With Hands-on Training Platform Performance Scores: Pilot Simulation Study for Basic Life Support Training

JMIR Serious Games. 2020 Nov 25;8(4):e24166. doi: 10.2196/24166.

Abstract

Background: Serious games enrich simulation-based health care training and improve learners' knowledge, skills, and self-confidence while entertaining them.

Objective: No platform on the market combines performance data from a basic life support (BLS) serious game app with hands-on performance data using the same scoring system. The aim of this study was to create such a platform and to investigate whether evaluating BLS training with a platform that combines Objective Structured Clinical Examination (OSCE) scoring criteria with data retrieved from the simulator's sensors would be more objective than conventional OSCE examinations.

Methods: Participants were 25 volunteers (11 [44.0%] men and 14 [56.0%] women) recruited from among Acıbadem Mehmet Ali Aydınlar University students with no prior knowledge of the BLS protocol. A serious game module was created to teach learners the European Resuscitation Council (ERC) Basic Life Support 2015 protocol. A second module, called the hands-on module, was designed for educators. It includes the checklist used for BLS OSCE examinations and retrieves sensor data, such as compression depth, compression frequency, and ventilation volume, from the manikin (CPR Lilly; 3B Scientific GmbH) via Bluetooth. These sensor data enable educators to evaluate learners more objectively. Performance data from the serious game module were combined with the results of the hands-on module. Data from the hands-on module were also compared with participants' conventional OSCE scores, which were obtained by watching video recordings of the same training sessions.
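The abstract does not describe the platform's scoring algorithm. As a purely illustrative sketch (not the study's actual software), one way a hands-on module could turn manikin sensor readings into a checklist-style score is to test each parameter against the published ERC 2015 target ranges (compression depth 5-6 cm, rate 100-120/min, ventilation volume roughly 500-600 mL); all function names and the scoring rule below are hypothetical:

```python
# Hypothetical sketch: score one CPR cycle by checking manikin sensor
# readings against ERC 2015 BLS target ranges. This is NOT the study's
# implementation; the scoring rule and names are illustrative assumptions.

ERC_2015_TARGETS = {
    "compression_depth_mm": (50, 60),       # 5-6 cm
    "compression_rate_per_min": (100, 120), # 100-120 compressions/min
    "ventilation_volume_ml": (500, 600),    # approximate tidal volume
}

def within(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def score_cycle(sensor_readings):
    """Return a 0-100 score: the fraction of monitored parameters
    whose sensor reading falls inside its ERC 2015 target range."""
    hits = sum(
        name in sensor_readings and within(sensor_readings[name], bounds)
        for name, bounds in ERC_2015_TARGETS.items()
    )
    return round(100 * hits / len(ERC_2015_TARGETS))

# Example: adequate depth and rate, but ventilation volume too low.
print(score_cycle({
    "compression_depth_mm": 55,
    "compression_rate_per_min": 110,
    "ventilation_volume_ml": 450,
}))  # prints 67
```

A rule like this makes the sensor-based portion of the evaluation reproducible, which is the objectivity advantage the study attributes to combining OSCE criteria with simulator sensor data.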

Results: Participants were considered successful in the game if they scored 80/100 or above, which they achieved in an average of 1.4 (SD 0.65) trials. The mean BLS serious game score was 88.3/100 (SD 5.17), the mean hands-on training app score was 70.7/100 (SD 17.3), and the mean OSCE score was 84.4/100 (SD 12.9). There was no statistically significant correlation among the number of trials to success (score ≥80/100), serious game scores, hands-on training app scores, and OSCE scores (Spearman rho test, P>.05).

Conclusions: Although the scoring criteria for the OSCE and the hands-on training app were identical, OSCE scores were 17% higher than hands-on training app scores. Analysis of the score differences between the two revealed that the differences originated from scoring parameters such as compression depth, compression frequency, and ventilation volume. These data suggest that evaluation of BLS training would be more objective if it were carried out with a modality that combines visual OSCE scoring criteria with data retrieved from the simulator's sensors.

Trial registration: ClinicalTrials.gov NCT04533893; https://clinicaltrials.gov/ct2/show/NCT04533893.

Keywords: basic life support; medical simulation; serious gaming.
