Non-autistic persons modulate their speech rhythm while talking to autistic individuals

PLoS One. 2023 Sep 28;18(9):e0285591. doi: 10.1371/journal.pone.0285591. eCollection 2023.


How non-autistic persons modulate their speech rhythm while talking to autistic (AUT) individuals remains unclear. We investigated two types of phonological characteristics in conversational speech between AUT and neurotypical (NT) individuals: (1) the frequency power of prosodic, syllabic, and phonetic rhythms, and (2) the dynamic interactions among these rhythms. Eight adults diagnosed with AUT (all men; age range, 24-44 years) and eight age-matched non-autistic NT adults (three women, five men; age range, 23-45 years) participated in this study. Six NT and eight AUT respondents were asked by one of two NT questioners (both men) to share their recent experiences on 12 topics. We included 87 samples of AUT-directed speech (from an NT questioner to an AUT respondent), 72 of NT-directed speech (from an NT questioner to an NT respondent), 74 of AUT speech (from an AUT respondent to an NT questioner), and 55 of NT speech (from an NT respondent to an NT questioner). We found similarities between AUT speech and AUT-directed speech, and between NT speech and NT-directed speech. Prosody and the interactions among prosodic, syllabic, and phonetic rhythms were significantly weaker in AUT-directed and AUT speech than in NT-directed and NT speech, respectively. AUT speech also showed weaker dynamic processing from higher to lower phonological bands (e.g., from prosody to syllable) than NT speech. Further, the weaker the frequency power of prosody in NT and AUT respondents, the weaker the frequency power of prosody in the NT questioners, suggesting that NT individuals spontaneously imitate the speech rhythms of both NT and AUT interlocutors. Although the questioners' speech samples came from just two NT individuals, our findings suggest that the phonological characteristics of a speaker may influence those of the interlocutor.
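To illustrate the kind of analysis the abstract describes, the sketch below estimates frequency power of a speech signal's amplitude envelope within prosodic, syllabic, and phonetic bands. This is a minimal illustration, not the paper's actual pipeline: the Hilbert-envelope/Welch approach and the band edges (prosodic ~0.5-3 Hz, syllabic ~4-8 Hz, phonetic ~20-40 Hz) are assumptions chosen for demonstration.

```python
# Hedged sketch: band power of a speech amplitude envelope.
# The envelope/Welch method and the band edges below are illustrative
# assumptions, not the exact analysis used in the study.
import numpy as np
from scipy.signal import hilbert, welch

def band_power(signal, fs, bands):
    """Summed Welch PSD of the amplitude envelope within each frequency band."""
    envelope = np.abs(hilbert(signal))  # slow amplitude modulations of speech
    freqs, psd = welch(envelope, fs=fs, nperseg=min(len(signal), 4 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Synthetic demo: a 150 Hz carrier modulated at 2 Hz (a prosody-like rhythm)
fs = 1000
t = np.arange(0, 10, 1 / fs)
sig = (1 + 0.8 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 150 * t)

bands = {"prosodic": (0.5, 3), "syllabic": (4, 8), "phonetic": (20, 40)}
powers = band_power(sig, fs, bands)  # the 2 Hz modulation lands in "prosodic"
```

Because the synthetic signal's envelope oscillates at 2 Hz, power concentrates in the prosodic band; real speech would show power across all three bands, and the study's between-group comparisons would operate on such per-band power values.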

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Autistic Disorder*
  • Female
  • Humans
  • Male
  • Middle Aged
  • Phonetics
  • Speech
  • Speech Perception*
  • Young Adult

Grants and funding

This research was supported by JST CREST 'Cognitive Feelings' (Grant Number JPMJCR21P4), JSPS KAKENHI (Grant Numbers 22KK0157, 22H05210, 21H05063, 21H05053, 20K22676), JST Moonshot Goal 9 (Grant Number JPMJMS2296), the Institute for AI and Beyond, the University of Tokyo, and the World Premier International Research Centre Initiative (WPI), MEXT, Japan. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.