Computational speech segregation based on an auditory-inspired modulation analysis

J Acoust Soc Am. 2014 Dec;136(6):3350. doi: 10.1121/1.4901711.

Abstract

A monaural speech segregation system is presented that estimates the ideal binary mask from noisy speech based on the supervised learning of amplitude modulation spectrogram (AMS) features. Instead of using linearly scaled modulation filters with constant absolute bandwidth, an auditory-inspired modulation filterbank with logarithmically scaled filters is employed. To reduce the dependency of the AMS features on the overall background noise level, a feature normalization stage is applied. In addition, a spectro-temporal integration stage is incorporated to exploit the contextual information about speech activity present in neighboring time-frequency units. To evaluate the generalization of the system to unseen acoustic conditions, the speech segregation system is trained on a limited set of low signal-to-noise ratio (SNR) conditions but tested over a wide range of SNRs up to 20 dB. A systematic evaluation demonstrates that auditory-inspired modulation processing can substantially improve mask estimation accuracy in the presence of both stationary and fluctuating interferers.
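
To make the processing chain concrete, the Python sketch below illustrates how AMS-style features might be computed with a logarithmically scaled, constant-Q modulation filterbank applied to subband envelopes, followed by a per-channel level normalization. The STFT front end, the 2-32 Hz modulation range, the second-order filters, the Q factor, and the mean-envelope normalization are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch of AMS feature extraction with a log-scaled modulation filterbank.
# All parameter choices below (STFT front end, 2-32 Hz modulation range,
# constant-Q second-order filters, mean-envelope normalization) are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def ams_features(x, fs, n_mod=9, fmod_lo=2.0, fmod_hi=32.0, q=1.0):
    """Return (mod_center_freqs, features[channel, mod_band, frame])."""
    # Stand-in auditory spectrogram: STFT magnitudes as subband envelopes.
    hop = 128
    _, _, X = stft(x, fs=fs, nperseg=512, noverlap=512 - hop)
    env = np.abs(X)                    # (n_channels, n_frames)
    fs_env = fs / hop                  # envelope (frame) sampling rate

    # Logarithmically spaced center frequencies with constant relative
    # bandwidth (constant Q), rather than linear spacing with constant
    # absolute bandwidth.
    fcs = np.geomspace(fmod_lo, fmod_hi, n_mod)
    assert fcs[-1] * (1 + 0.5 / q) < fs_env / 2, "modulation band above Nyquist"

    feats = np.empty((env.shape[0], n_mod, env.shape[1]))
    for i, fc in enumerate(fcs):
        bw = fc / q                    # constant-Q bandwidth
        sos = butter(2, [fc - bw / 2, fc + bw / 2], btype="bandpass",
                     fs=fs_env, output="sos")
        # Zero-phase (offline) filtering of each channel's envelope;
        # the magnitude gives the modulation energy per band and frame.
        feats[:, i, :] = np.abs(sosfiltfilt(sos, env, axis=1))

    # Per-channel normalization to reduce dependence on the absolute
    # background noise level.
    norm = env.mean(axis=1, keepdims=True)          # (n_channels, 1)
    return fcs, feats / (norm[:, :, None] + 1e-12)
```

In a pipeline along these lines, each time-frequency unit's feature vector, optionally stacked with those of neighboring units to emulate the spectro-temporal integration stage, would then serve as input to a supervised classifier trained to predict the ideal binary mask.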

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Humans
  • Models, Theoretical
  • Perceptual Masking*
  • Phonetics*
  • Signal Processing, Computer-Assisted*
  • Software
  • Sound Spectrography*
  • Speech Acoustics*
  • Speech Perception*
  • Speech Reception Threshold Test