Attention Based Convolutional Neural Network with Multi-frequency Resolution Feature for Environment Sound Classification

Neural Process Lett. 2022 Oct 24:1-16. doi: 10.1007/s11063-022-11041-y. Online ahead of print.

Abstract

Environmental sound classification is of great research significance in intelligent audio monitoring and related fields. This paper proposes a novel multi-frequency resolution (MFR) feature to address the limitation that existing single-frequency-resolution time-frequency features cannot effectively represent the characteristics of diverse sound types. The MFR feature is composed of three features with different frequency resolutions, each compressed to a different degree along the time dimension. This approach not only acts as a form of data augmentation but also captures more contextual information during feature extraction. The MFR features of the Log-Mel Spectrogram, Cochleagram, and Constant-Q Transform are then combined to form a multi-channel MFR feature. In addition, a network named SacNet is built to address the problem that time-frequency feature maps of sound contain a large amount of invalid information. The basic structural unit of SacNet consists of two parallel branches: one uses depthwise separable convolution as the main feature extractor, while the other uses a spatial attention module to extract more discriminative information. Experimental results demonstrate that the proposed method achieves state-of-the-art accuracies of 97.5%, 93.1%, and 95.3% on the three benchmark datasets ESC10, ESC50, and UrbanSound8K respectively, improvements of 3.3%, 0.5%, and 2.3% over previous advanced methods.
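As a rough illustration of the MFR idea, the sketch below computes spectrograms of one signal at three window lengths (longer windows give finer frequency resolution but coarser time resolution), compresses each along the time axis to a common length, and stacks them as channels. This is a minimal numpy-only approximation, not the paper's pipeline: it uses plain log-magnitude STFTs in place of the Log-Mel, Cochleagram, and Constant-Q features, and all window sizes, hop length, and output dimensions are illustrative assumptions.

```python
import numpy as np

def stft_mag(signal, win_len, hop):
    """Magnitude STFT with a Hann window; frequency resolution grows with win_len."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, win_len//2 + 1)

def compress_time(spec, n_target):
    """Average adjacent frames so every resolution shares one time-axis length."""
    idx = np.linspace(0, spec.shape[0], n_target + 1).astype(int)
    return np.stack([spec[a:b].mean(axis=0) for a, b in zip(idx[:-1], idx[1:])])

def mfr_feature(signal, win_lens=(256, 512, 1024), hop=128,
                n_frames=64, n_bins=129):
    """Stack three frequency resolutions into one multi-channel feature.

    Each branch starts from a different window length, so its raw spectrogram
    has a different frame count; compressing all of them to n_frames applies a
    different degree of time compression per branch, as in the MFR feature.
    Parameter values here are assumptions for illustration only.
    """
    channels = []
    for w in win_lens:
        s = stft_mag(signal, w, hop)
        s = compress_time(s, n_frames)
        # Pick a common number of frequency bins across branches (nearest-bin).
        f_idx = np.linspace(0, s.shape[1] - 1, n_bins).astype(int)
        channels.append(np.log1p(s[:, f_idx]))
    return np.stack(channels)  # shape: (3, n_frames, n_bins)
```

A one-second 440 Hz tone at 16 kHz, for example, yields a `(3, 64, 129)` array whose three channels trade time detail for frequency detail, ready to feed a CNN as a multi-channel input.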

Keywords: Convolutional neural network; Depthwise separable convolution; Environment sound classification; Multi-frequency resolution; Spatial attention module; Time–frequency feature.