Polyphonic Sonification of Electrocardiography Signals for Diagnosis of Cardiac Pathologies

Jakob Nikolas Kather et al. Sci Rep. 7, 44549
Abstract

Electrocardiography (ECG) data are multidimensional temporal data with ubiquitous applications in the clinic. Conventionally, these data are presented visually. It is presently unclear to what degree data sonification (auditory display) can enable the detection of clinically relevant cardiac pathologies in ECG data. In this study, we introduce a method for polyphonic sonification of ECG data, whereby different ECG channels are simultaneously represented by tones of different pitch. We retrospectively applied this method to 12 samples from a publicly available ECG database. We and colleagues then analyzed these data in a blinded manner. Based on these analyses, we found that the sonification technique can be intuitively understood after a short training session. On average, the correct classification rate for observers trained in cardiology was 78%, compared to 68% for observers not trained in cardiology and 50% for observers not trained in medicine at all. These values compare to an expected random-guessing performance of 25%. Strikingly, 27% of all observers had a classification accuracy above 90%, indicating that sonification can be used very successfully by talented individuals. These findings can serve as a baseline for potential clinical applications of ECG sonification.

Conflict of interest statement

The authors declare no competing financial interests.

Figures

Figure 1
Figure 1. Principle of polyphonic sonification of multi-channel ECG data.
(a) The Cabrera circle shows the direction of the ECG signal channels projected on a frontal plane through the human body. (b) In our technique, each of the six standard ECG channels is assigned a musical note so that the human auditory system can identify each channel even if multiple channels are played simultaneously. The sampling rate of the ECG signals was 257 Hz and the data shown in (b) correspond to 10 seconds.
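The mapping described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes each of the six ECG channels is assigned a fixed pitch (here, notes of a C-major chord, an assumption) and that the normalized channel amplitude modulates the loudness of a sine tone at that pitch; the audio sampling rate and envelope scheme are also illustrative assumptions.

```python
import numpy as np

FS_ECG = 257      # ECG sampling rate (Hz), as stated in the figure caption
FS_AUDIO = 8000   # audio sampling rate (Hz) -- an assumption
DURATION = 10     # seconds of ECG, as in the figure

# One assumed pitch (Hz) per ECG channel (hypothetical mapping).
CHANNEL_PITCHES = [261.63, 329.63, 392.00, 523.25, 659.26, 783.99]

def sonify(ecg, pitches=CHANNEL_PITCHES, fs_audio=FS_AUDIO):
    """Map each ECG channel to an amplitude-modulated sine tone and mix."""
    n_channels, n_samples = ecg.shape
    t_audio = np.arange(int(n_samples / FS_ECG * fs_audio)) / fs_audio
    t_ecg = np.arange(n_samples) / FS_ECG
    mix = np.zeros_like(t_audio)
    for ch in range(n_channels):
        # Normalize the channel to [0, 1] so it can act as a loudness envelope.
        env = ecg[ch] - ecg[ch].min()
        if env.max() > 0:
            env = env / env.max()
        # Resample the envelope to the audio rate by linear interpolation.
        env_audio = np.interp(t_audio, t_ecg, env)
        mix += env_audio * np.sin(2 * np.pi * pitches[ch] * t_audio)
    return mix / n_channels  # keep the mixed signal within [-1, 1]

# Usage: 6 channels x 10 s of synthetic ECG-like data.
rng = np.random.default_rng(0)
ecg = rng.standard_normal((6, FS_ECG * DURATION))
audio = sonify(ecg)
```

Because each channel keeps its own constant pitch, the auditory system can separate the simultaneous streams, which is the premise of the polyphonic design.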
Figure 2
Figure 2. Pathological ECG samples used for auditory data analysis.
(a–d) Sample ECG signals for clinically relevant cardiac pathologies. These samples were used for training of human observers. Subsequently, other samples were used to assess user performance in a blinded study. Channels mapped to lower frequencies are shown in blue/green hues, while channels mapped to higher frequencies are shown in yellowish hues. The sampling rate of the ECG signals was 257 Hz and the data correspond to 10 seconds.
Figure 3
Figure 3. Group performance in blinded assessment of ECG signals.
Average correct classification rate for each of the three observer groups.
Figure 4
Figure 4. Classification performance of blinded observers.
Data analysis results for 12 sound samples and 22 observers are shown. White cells show correct classification, black cells show wrong classification. Observers are ordered by their group (G), with G1 = medical students with completed cardiology training or resident physicians; G2 = medical students before their cardiology course; G3 = science students without any formal training in cardiology.
Figure 5
Figure 5. Confusion matrix of the classification.
Classification performance is shown for all 22 observers across 264 classification tasks. Units on the color bar represent the number of samples. The vertical axis represents the true class; the horizontal axis represents the class assigned by human observers. Correctly classified samples lie on the diagonal, while misclassified samples lie off the diagonal. The class “PVC” showed the highest number of correct classifications (STEMI = ST-elevation myocardial infarction, PVC = premature ventricular contraction, A. Fib. = atrial fibrillation).
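The evaluation behind this figure can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the four class labels are an assumption consistent with the 25% random-guessing baseline, and the toy answers are invented for demonstration only.

```python
import numpy as np

# Assumed four-class label set (consistent with a 25% chance level).
CLASSES = ["Healthy", "STEMI", "PVC", "A. Fib."]

def confusion_matrix(true_labels, predicted_labels, classes=CLASSES):
    """Rows: true class; columns: class assigned by the observer."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[idx[t], idx[p]] += 1
    return cm

def accuracy(cm):
    """Fraction of samples on the diagonal (correct classifications)."""
    return np.trace(cm) / cm.sum()

# Toy example: 4 tasks, 3 answered correctly.
truth = ["PVC", "STEMI", "Healthy", "A. Fib."]
answers = ["PVC", "STEMI", "A. Fib.", "A. Fib."]
cm = confusion_matrix(truth, answers)
print(accuracy(cm))  # -> 0.75
```

In the study itself, the same aggregation over 22 observers and 12 samples yields the 264 tasks summarized in the figure.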


