Subspace learning for facial expression recognition: An overview and a new perspective

Cigdem Turan, Rui Zhao, Kin Man Lam, Xiangjian He

Research output: Journal Publication › Article › peer-review

7 Citations (Scopus)

Abstract

For image recognition, an extensive number of subspace-learning methods have been proposed to overcome the high dimensionality of the features being used. In this paper, we first give an overview of the most popular and state-of-the-art subspace-learning methods, and then present a novel manifold-learning method, named the soft locality preserving map (SLPM). SLPM aims to control the level of spread of the different classes, which is closely connected to the generalizability of the learned subspace. We also review the extension of manifold-learning methods to deep learning by formulating their loss functions for training, and further reformulate SLPM into a soft locality preserving (SLP) loss. These loss functions are applied as an additional regularization to the training of deep neural networks. We evaluate these subspace-learning methods, as well as their deep-learning extensions, on facial expression recognition. Experiments on four commonly used databases show that SLPM effectively reduces the dimensionality of the feature vectors and enhances the discriminative power of the extracted features. Moreover, the experimental results also demonstrate that the deep features learned under SLP regularization achieve better discriminability and generalizability for facial expression recognition.
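The abstract describes reformulating a manifold-learning objective (SLPM) into a loss term that regularizes deep-network training. The exact SLP formulation is not given in this abstract, so the sketch below is only a generic locality-preserving regularizer in the same spirit: it pulls same-class feature vectors together and pushes different-class pairs apart up to a margin, with a weight `beta` controlling the trade-off (loosely corresponding to "controlling the level of spread of the different classes"). The function name, `margin`, and `beta` are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def slp_style_loss(features, labels, beta=1.0, margin=10.0):
    """Hypothetical locality-preserving regularizer (NOT the paper's SLP):
    sum of squared distances within each class, plus a hinge term that
    penalizes different-class pairs closer than `margin`."""
    n = features.shape[0]
    intra, inter = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = float(np.sum((features[i] - features[j]) ** 2))
            if labels[i] == labels[j]:
                intra += d2                      # compactness within a class
            else:
                inter += max(0.0, margin - d2)   # separation up to a margin
    return intra + beta * inter
```

In a deep-learning setting, a term like this would be computed on the penultimate-layer features of a mini-batch and added to the classification loss, serving as the "additional regularization" the abstract mentions.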

Original language: English
Journal: APSIPA Transactions on Signal and Information Processing
DOIs
Publication status: Accepted/In press - 2021
Externally published: Yes

Keywords

  • Deep learning
  • Facial expression recognition
  • Subspace learning

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
