Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to timing learning and that, as in models of visuo-spatial learning, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here, reduced variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
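The proposed learning rule, correcting a fixed proportion of the perceived timing error on each tap, can be illustrated with a minimal simulation. This is a hypothetical sketch, not the authors' actual model: all parameter values (drift, motor noise, correction gain) are illustrative assumptions. The sketch shows why an accurate auditory error signal (synchronous sound) should reduce tapping variability, while a jittered error signal degrades the benefit of correction.

```python
import random
import statistics

def simulate_tapping(alpha, jitter_sd, n_taps=500, motor_sd=20.0,
                     drift_sd=5.0, seed=1):
    """Simulate inter-tap timing deviations (ms) under proportional
    error correction.

    alpha:     proportion of the perceived error corrected each tap
               (0 = no feedback-based learning).
    jitter_sd: sd of noise corrupting the auditory error signal
               (0 = synchronous sound; large = jittered sound).
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    drift = 0.0                # slowly wandering internal timing bias
    deviations = []
    for _ in range(n_taps):
        drift += rng.gauss(0, drift_sd)            # timekeeper drift
        error = drift + rng.gauss(0, motor_sd)     # produced deviation
        deviations.append(error)
        perceived = error + rng.gauss(0, jitter_sd)  # feedback signal
        drift -= alpha * perceived                 # correct a proportion
    return statistics.stdev(deviations)
```

With correction disabled (alpha = 0) the internal bias random-walks and variability grows; with an accurate error signal, proportional correction keeps deviations stationary and small; adding jitter to the error signal feeds noise back into the correction, eroding the improvement, which mirrors the group differences reported above.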
Keywords: Action–perception coupling; Auditory feedback; Feedback error-based learning; Motor learning; Movement variability; Music; Timing.
Copyright © 2015 Elsevier B.V. All rights reserved.