Fine-Grained Temporal-Enhanced Transformer for Dynamic Facial Expression Recognition

Yaning Zhang, Jiahe Zhang, Linlin Shen, Zitong Yu, Zan Gao

Research output: Journal Publication › Article › peer-review

Abstract

Dynamic facial expression recognition (DFER) plays a vital role in understanding human emotions and behaviors. Existing efforts largely follow a single-modality self-supervised pretraining paradigm, which limits the representation ability of models. Moreover, coarse-grained temporal modeling struggles to capture subtle facial expression representations from diverse inputs. In this letter, we propose a novel method for DFER, termed the fine-grained temporal-enhanced transformer (FTET-DFER), which consists of two stages. First, we exploit the inherent correlation between the visual and auditory modalities in real videos to capture temporally dense representations, such as facial movements and expressions, through self-supervised audio-visual learning. Second, we use the learned embeddings as targets to perform DFER. In addition, we design the FTET block to learn fine-grained temporal-enhanced facial expression features based on intra-clip locally-enhanced relations as well as inter-clip locally-enhanced global relationships in videos. Extensive experiments show that FTET-DFER outperforms state-of-the-art methods in both within-dataset and cross-dataset evaluations.
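
The two-level temporal design described above can be illustrated with a minimal sketch: intra-clip attention first relates frames inside each short clip (locally-enhanced relations), and inter-clip attention then relates clip-level summaries across the whole video (locally-enhanced global relationships). The PyTorch sketch below is a hypothetical reading of that idea under stated assumptions, not the authors' implementation; the class name FTETBlockSketch, the mean-pooled clip tokens, and all shapes and hyperparameters are illustrative choices.

```python
# Hypothetical sketch of an FTET-style two-level temporal block.
# All names, shapes, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class FTETBlockSketch(nn.Module):
    """Two-level temporal attention over a clip-partitioned frame sequence.

    Level 1: intra-clip attention among frames of the same short clip.
    Level 2: inter-clip attention among clip-level summary tokens,
             broadcast back to every frame as global context.
    """

    def __init__(self, dim: int = 256, heads: int = 4, clip_len: int = 8):
        super().__init__()
        self.clip_len = clip_len
        self.intra_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim); frames assumed divisible by clip_len.
        b, t, d = x.shape
        n_clips = t // self.clip_len

        # Intra-clip: fold clips into the batch axis so attention only
        # sees frames belonging to the same clip.
        local = x.reshape(b * n_clips, self.clip_len, d)
        local = local + self.intra_attn(local, local, local)[0]
        local = self.norm1(local).reshape(b, t, d)

        # Inter-clip: mean-pool each clip to one token, attend globally,
        # then broadcast the clip context back to every frame.
        clips = local.reshape(b, n_clips, self.clip_len, d).mean(dim=2)
        clips = clips + self.inter_attn(clips, clips, clips)[0]
        ctx = self.norm2(clips).repeat_interleave(self.clip_len, dim=1)
        return local + ctx


if __name__ == "__main__":
    feats = torch.randn(2, 16, 256)        # 2 videos, 16 frames, 256-d features
    print(FTETBlockSketch()(feats).shape)  # torch.Size([2, 16, 256])
```

One design choice worth noting: folding clips into the batch axis keeps intra-clip attention strictly local and cheap, while the pooled clip tokens keep the global pass short (n_clips instead of all frames), which matches the fine-grained-then-global structure the abstract describes.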

Original language: English
Pages (from-to): 2560-2564
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 31
DOIs
Publication status: Published - 2024
Externally published: Yes

Keywords

  • Dynamic facial expression recognition
  • self-supervised learning
  • transformer

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
