Abstract
Dynamic facial expression recognition (DFER) plays a vital role in understanding human emotions and behaviors. Existing efforts tend to follow a single-modality self-supervised pretraining paradigm, which limits the representation ability of models. Moreover, coarse-grained temporal modeling struggles to capture subtle facial expression representations from various inputs. In this letter, we propose a novel method for DFER, termed the fine-grained temporal-enhanced transformer (FTET-DFER), which consists of two stages. First, we exploit the inherent correlation between the visual and auditory modalities in real videos to capture temporally dense representations, such as facial movements and expressions, in a self-supervised audio-visual learning manner. Second, we use the learned embeddings as targets to perform DFER. In addition, we design the FTET block to learn fine-grained temporal-enhanced facial expression features based on intra-clip locally-enhanced relations as well as inter-clip locally-enhanced global relations in videos. Extensive experiments show that FTET-DFER outperforms state-of-the-art methods in both within-dataset and cross-dataset evaluations.
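The abstract does not include reference code, so the following is only a minimal, hypothetical PyTorch sketch of how an FTET-style block might combine intra-clip locally-enhanced attention with inter-clip global attention over frame tokens. The class name `FTETBlock`, the dimensions, the mean-pooling of clips, and the broadcast of clip-level context back to frames are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FTETBlock(nn.Module):
    """Illustrative sketch (not the paper's code): local attention
    within each clip, then global attention across clip tokens."""

    def __init__(self, dim=512, heads=8, clip_len=8):
        super().__init__()
        self.clip_len = clip_len
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, num_clips * clip_len, dim) frame-level tokens
        b, t, d = x.shape
        n = t // self.clip_len
        # Intra-clip: attend only among frames of the same clip.
        local = x.reshape(b * n, self.clip_len, d)
        local = self.norm1(local + self.local_attn(local, local, local)[0])
        x = local.reshape(b, t, d)
        # Inter-clip: pool each clip to one token (assumed mean pooling),
        # attend across clips, then add the clip-level context back
        # to every frame of the corresponding clip.
        clips = x.reshape(b, n, self.clip_len, d).mean(dim=2)
        clips = self.norm2(clips + self.global_attn(clips, clips, clips)[0])
        x = x + clips.repeat_interleave(self.clip_len, dim=1)
        return x

# Example: 2 videos, 4 clips of 8 frames each, 512-dim tokens.
block = FTETBlock()
out = block(torch.randn(2, 32, 512))  # output shape: (2, 32, 512)
```

The two-stage attention here is one plausible reading of "intra-clip locally-enhanced relations" and "inter-clip locally-enhanced global relations"; the published architecture may differ.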
| Original language | English |
|---|---|
| Pages (from-to) | 2560-2564 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 31 |
| DOIs | |
| Publication status | Published - 2024 |
| Externally published | Yes |
Keywords
- Dynamic facial expression recognition
- self-supervised learning
- transformer
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics