BIG-MOE: BYPASSING ISOLATED GATING FOR GENERALIZED MULTIMODAL FACE ANTI-SPOOFING

Yingjie Ma, Zitong Yu, Xun Lin, Weicheng Xie, Linlin Shen

Research output: Conference article · peer-review

1 Citation (Scopus)

Abstract

In the domain of facial recognition security, multimodal Face Anti-Spoofing (FAS) is essential for countering presentation attacks. However, existing technologies encounter challenges due to modality biases and imbalances, as well as domain shifts. Our research introduces a Mixture of Experts (MoE) model to address these issues effectively. We identified three limitations in traditional MoE approaches to multimodal FAS: (1) coarse-grained experts' inability to capture nuanced spoofing indicators; (2) gating networks' susceptibility to input noise, which affects decision-making; (3) MoE's sensitivity to prompt tokens, which leads to overfitting under conventional learning methods. To mitigate these, we propose the Bypass Isolated Gating MoE (BIG-MoE) framework, featuring: (1) fine-grained experts for enhanced detection of subtle spoofing cues; (2) an isolation gating mechanism to counteract input noise; (3) a novel differential convolutional prompt bypass that enriches the gating network with critical local features, thereby improving perceptual capabilities. Extensive experiments on four benchmark datasets demonstrate significant generalization performance improvements on the multimodal FAS task. The code is released at https://github.com/murInJ/BIG-MoE.
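To make the MoE terminology in the abstract concrete, the sketch below shows the generic mixture-of-experts pattern that BIG-MoE builds on: a gating network scores each expert per input token, and the layer output is the gate-weighted sum of the expert outputs. This is a minimal illustration only; all class names, shapes, and the single-linear-map experts are assumptions for brevity, and the paper's fine-grained experts, isolation gating, and differential convolutional prompt bypass are not reproduced here (see the linked repository for the actual implementation).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used by the gating network.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Minimal mixture-of-experts layer (illustrative, not BIG-MoE).

    Each expert is reduced to a single linear map; the gating network
    is a linear projection followed by softmax over experts.
    """
    def __init__(self, dim, num_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = [rng.standard_normal((dim, dim)) * 0.02
                        for _ in range(num_experts)]
        self.gate_w = rng.standard_normal((dim, num_experts)) * 0.02

    def forward(self, x):
        # x: (tokens, dim)
        gates = softmax(x @ self.gate_w, axis=-1)           # (tokens, E)
        outs = np.stack([x @ w for w in self.experts], -1)  # (tokens, dim, E)
        # Gate-weighted combination of expert outputs.
        return (outs * gates[:, None, :]).sum(-1)           # (tokens, dim)

moe = MoELayer(dim=8, num_experts=4)
y = moe.forward(np.ones((3, 8)))
```

The paper's contribution is precisely in how this gating is computed: isolating the gate from noisy inputs and feeding it local features through a convolutional prompt bypass, rather than gating directly on the raw token as above.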

Keywords

  • Face Anti-Spoofing
  • Mixture of Experts
  • Multimodal
  • Prompt Learning

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

