Identifying musical pieces from fMRI data using encoding and decoding models

Sci Rep. 2018 Feb 2;8(1):2266. doi: 10.1038/s41598-018-20732-3.

Abstract

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for the music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
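The two-stage approach described above can be sketched in a minimal form: stage one fits a linearized encoding model from stimulus features to voxel responses, and stage two identifies a novel piece by comparing the observed response against model-predicted responses for each candidate. The sketch below uses synthetic data and ridge regression as the encoding step; all array sizes, the regularization value, and the correlation-based identification rule are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data (all dimensions hypothetical):
n_train_tp, n_test_tp = 200, 40   # time points (stimulus duration)
n_feat, n_vox = 10, 50            # musical features, auditory-cortex voxels

# Ground-truth linear mapping used only to simulate fMRI responses.
W_true = rng.normal(size=(n_feat, n_vox))
X_train = rng.normal(size=(n_train_tp, n_feat))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train_tp, n_vox))

# Stage 1: encoding model -- ridge regression from features to voxels.
lam = 1.0  # illustrative regularization strength
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# Stage 2: identification -- among candidate pieces, choose the one whose
# predicted response correlates best with the observed response.
candidates = [rng.normal(size=(n_test_tp, n_feat)) for _ in range(5)]
true_idx = 2
Y_obs = (candidates[true_idx] @ W_true
         + 0.5 * rng.normal(size=(n_test_tp, n_vox)))

def corr(a, b):
    """Pearson correlation between two flattened response matrices."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [corr(X_c @ W, Y_obs) for X_c in candidates]
decoded_idx = int(np.argmax(scores))
print(decoded_idx == true_idx)
```

Under this framing, the abstract's duration and spatial-extent analyses correspond to varying `n_test_tp` and the number of voxel columns entering the correlation, respectively.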

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation*
  • Adult
  • Auditory Cortex / physiology*
  • Female
  • Healthy Volunteers
  • Humans
  • Magnetic Resonance Imaging*
  • Male
  • Models, Neurological
  • Music*
  • Spatio-Temporal Analysis
  • Young Adult