Decoding Musical Training From Dynamic Processing of Musical Features in the Brain

Sci Rep. 2018 Jan 15;8(1):708. doi: 10.1038/s41598-018-19177-5.

Abstract

Pattern recognition on neural activations from naturalistic music listening has been successful at predicting neural responses of listeners from musical features, and vice versa. Inter-subject differences in decoding accuracy have been attributed partly to musical training, which has widely recognized structural and functional effects on the brain. We propose and evaluate a decoding approach aimed at predicting the musicianship class of an individual listener from the dynamic neural processing of musical features. Whole-brain functional magnetic resonance imaging (fMRI) data were acquired from musicians and nonmusicians while they listened to three musical pieces from different genres. Six musical features, representing low-level (timbre) and high-level (rhythm and tonality) aspects of music perception, were computed from the acoustic signals, and classification into musicians and nonmusicians was performed on the musical-feature and parcellated fMRI time series. Cross-validated classification accuracy reached 77% with nine regions, comprising frontal and temporal cortical regions, the caudate nucleus, and the cingulate gyrus. The processing of high-level musical features at the right superior temporal gyrus was most influenced by listeners' musical training. The study demonstrates the feasibility of decoding musicianship from how individual brains listen to music, attaining accuracy comparable to current results from automated clinical diagnosis of neurological and psychological disorders.
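
For illustration, the classification step described above can be sketched as follows. This is a minimal stand-in, not the authors' pipeline: the subject counts, the feature-by-region correlation descriptor, the linear support vector machine, and the Python/NumPy/scikit-learn tooling are all assumptions made for the example. Real use would substitute the actual six musical-feature time series and the parcellated fMRI time series in place of the synthetic data.

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 36, 400   # hypothetical sample sizes
n_features, n_regions = 6, 9         # six musical features, nine brain regions

# Synthetic stand-ins: one musical-feature time series shared by all
# listeners, and per-subject regional fMRI time series.
music = rng.standard_normal((n_timepoints, n_features))
fmri = rng.standard_normal((n_subjects, n_timepoints, n_regions))
labels = np.repeat([0, 1], n_subjects // 2)  # 0 = nonmusician, 1 = musician

# Inject weak feature tracking into the "musician" group so the toy
# classifier has a signal to find.
fmri[labels == 1, :, :n_features] += 0.2 * music

def feature_region_correlations(music, bold):
    """Pearson correlation of each musical feature with each region's BOLD series."""
    m = (music - music.mean(0)) / music.std(0)
    b = (bold - bold.mean(0)) / bold.std(0)
    return (m.T @ b) / len(m)  # shape: (n_features, n_regions)

# One descriptor per subject: the flattened feature-by-region correlation map.
X = np.array([feature_region_correlations(music, s).ravel() for s in fmri])

# Cross-validated classification into musicians vs. nonmusicians.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

The descriptor choice (how strongly each region's time series tracks each musical feature) mirrors the abstract's framing of "dynamic neural processing of musical features," but the specific statistic and classifier here are illustrative assumptions, not the published method.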

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Auditory Perception*
  • Brain / physiology*
  • Brain Mapping
  • Female
  • Humans
  • Magnetic Resonance Imaging
  • Male
  • Music / psychology*
  • Young Adult