Abstract
Deep neural networks have been applied to audio spectrograms for respiratory sound classification. Existing models often treat the spectrogram as a synthetic image while overlooking its physical characteristics. In this paper, a Multi-View Spectrogram Transformer (MVST) is proposed to embed different views of time-frequency characteristics into the vision transformer. Specifically, the proposed MVST splits the mel-spectrogram into patches of different sizes, representing the multi-view acoustic elements of a respiratory sound. The patches and positional embeddings are fed into transformer encoders to extract attentional information among patches through a self-attention mechanism. Finally, a gated fusion scheme is designed to automatically weight the multi-view features, emphasizing the most informative view for a given input. Experimental results on the ICBHI dataset demonstrate that the MVST significantly outperforms state-of-the-art methods for classifying respiratory sounds. The code is available at: https://github.com/wentaoheunnc/MVST.
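The pipeline sketched in the abstract (multi-view patch splitting, per-view encoding, gated fusion) can be illustrated with a minimal NumPy toy. This is a hedged sketch, not the authors' implementation: the patch shapes, embedding dimension, and the fixed random projection standing in for the learned patch embedding and transformer encoder are all illustrative assumptions; only the overall flow (split each view into patches, produce one feature vector per view, fuse with a softmax gate) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mel-spectrogram: 128 mel bins x 256 time frames (hypothetical sizes).
spec = rng.standard_normal((128, 256))

def split_patches(x, ph, pw):
    """Split a 2-D spectrogram into non-overlapping (ph x pw) patches,
    returning an array of flattened patches, shape (n_patches, ph*pw)."""
    H, W = x.shape
    patches = [
        x[i:i + ph, j:j + pw].ravel()
        for i in range(0, H - ph + 1, ph)
        for j in range(0, W - pw + 1, pw)
    ]
    return np.stack(patches)

def view_feature(x, ph, pw, dim=64):
    """Embed one view's patches and mean-pool into a single feature vector.
    A fixed random projection is a stand-in for the learned patch embedding
    plus transformer encoder described in the paper."""
    patches = split_patches(x, ph, pw)
    proj = rng.standard_normal((patches.shape[1], dim)) / np.sqrt(patches.shape[1])
    return (patches @ proj).mean(axis=0)          # (dim,)

# Two views with different patch shapes: a squarer patch vs. one that is
# taller in frequency and shorter in time (illustrative choices).
f1 = view_feature(spec, 16, 16)
f2 = view_feature(spec, 32, 8)

def gated_fusion(feats, w_gate):
    """Score each view, softmax the scores, and take the weighted sum."""
    F = np.stack(feats)                           # (n_views, dim)
    scores = F @ w_gate                           # (n_views,)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ F                                  # fused feature, (dim,)

w_gate = rng.standard_normal(f1.shape[0])
fused = gated_fusion([f1, f2], w_gate)
print(fused.shape)  # prints (64,)
```

In the real model the gate and embeddings are trained end-to-end and the fused feature feeds a classification head; here the gate simply demonstrates how per-view softmax weights let one view dominate when its features score higher.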
| Field | Value |
|---|---|
| Original language | English |
| Published in | ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
| DOIs | |
| Publication status | Published Online - 18 Mar 2024 |
Free Keywords
- Respiratory sound classification
- Mel-spectrogram
- Vision Transformer
- ICBHI dataset