Abstract
Vision Transformers, known for their innovative architectural design and modeling capabilities, have gained significant attention in computer vision. This paper presents a dual-path approach that leverages the strengths of the Multi-Axis Vision Transformer (MaxViT) and the Improved Multiscale Vision Transformer (MViTv2). The approach first encodes speech signals into two representations: Constant-Q Transform (CQT) spectrograms and Mel spectrograms computed via the Short-Time Fourier Transform (Mel-STFT). The CQT spectrogram is fed into the MaxViT model, while the Mel-STFT spectrogram is input to the MViTv2 model, each extracting informative features from its spectrogram. These features are then fused and passed to a Multilayer Perceptron (MLP) for final classification. This hybrid model is named the 'MaxViT and MViTv2 Fusion Network with Multilayer Perceptron (MaxMViT-MLP).' The MaxMViT-MLP model achieves remarkable results, with an accuracy of 95.28% on Emo-DB, 89.12% on RAVDESS, and 68.39% on IEMOCAP, substantiating the advantages of integrating multiple audio feature representations and Vision Transformers in speech emotion recognition.
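The dual-path fusion described in the abstract can be sketched as follows. This is a minimal, hedged illustration using NumPy with random-weight stand-ins for the two backbone encoders: the functions `maxvit_features`, `mvitv2_features`, and `mlp_classify`, along with the spectrogram shapes and feature dimensions, are hypothetical placeholders, not the paper's actual MaxViT/MViTv2 implementations. It shows only the structure of the pipeline: two spectrogram inputs, two feature extractors, concatenation, and an MLP classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

def maxvit_features(cqt_spec):
    # Stand-in for the MaxViT branch: flatten the CQT spectrogram
    # and project it to a 512-dim feature vector (random weights here,
    # not a trained transformer).
    W = rng.standard_normal((cqt_spec.size, 512)) * 0.01
    return cqt_spec.ravel() @ W

def mvitv2_features(mel_spec):
    # Stand-in for the MViTv2 branch on the Mel-STFT spectrogram.
    W = rng.standard_normal((mel_spec.size, 512)) * 0.01
    return mel_spec.ravel() @ W

def mlp_classify(features, n_classes=7):
    # Single-hidden-layer MLP head: ReLU hidden layer, softmax output
    # over the emotion classes (7 is a placeholder class count).
    W1 = rng.standard_normal((features.size, 128)) * 0.01
    W2 = rng.standard_normal((128, n_classes)) * 0.01
    h = np.maximum(features @ W1, 0.0)
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Dual-path fusion: extract features from both spectrogram views,
# concatenate them, and classify with the MLP head.
cqt = rng.standard_normal((84, 128))   # hypothetical CQT spectrogram
mel = rng.standard_normal((128, 128))  # hypothetical Mel-STFT spectrogram
fused = np.concatenate([maxvit_features(cqt), mvitv2_features(mel)])
probs = mlp_classify(fused)
```

In the actual model, each branch would be a pretrained Vision Transformer producing learned embeddings, and the whole network would be trained end-to-end; the concatenation step is what makes the two complementary spectrogram representations jointly available to the classifier.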
| Original language | English |
|---|---|
| Pages (from-to) | 18237-18250 |
| Number of pages | 14 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 2024 |
| Externally published | Yes |
Keywords
- Emo-DB
- ensemble learning
- IEMOCAP
- RAVDESS
- spectrogram
- speech emotion recognition
- vision transformer
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering