Adversarial multi-task learning with inverse mapping for speech enhancement

Yuanhang Qiu, Ruili Wang, Feng Hou, Satwinder Singh, Zhizhong Ma, Xiaoyun Jia

Research output: Journal Publication › Article › peer-review

10 Citations (Scopus)

Abstract

Adversarial Multi-Task Learning (AMTL) has demonstrated a promising capability for information capture and representation learning; however, it has hardly been explored in speech enhancement. In this paper, we propose a novel adversarial multi-task learning with inverse mapping method for speech enhancement. Our method focuses on enhancing the generator's capability to capture speech information and learn representations. To implement this method, two extra networks (namely P and Q) are developed to establish the inverse mapping from the generated distribution to the input data domains. Correspondingly, two new loss functions (i.e., a latent loss and an equilibrium loss) are proposed for inverse mapping learning and are used to train the enhancement model together with the original adversarial loss. Our method obtains state-of-the-art performance in terms of speech quality (PESQ = 2.93, CVOL = 3.55). For speech intelligibility, our method also obtains competitive performance (STOI = 0.947). The experimental results demonstrate that our method can effectively improve speech representation learning and speech enhancement performance.
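To make the described training setup concrete, below is a minimal PyTorch-style sketch of a generator G trained with an adversarial loss plus two inverse-mapping networks P and Q. The abstract does not give the loss formulations, so the L1 reconstruction form of the latent loss and the simple balancing form of the equilibrium loss, as well as all network shapes and names, are illustrative assumptions rather than the authors' definitions.

```python
# Hedged sketch of the training objective described in the abstract.
# The loss forms and architectures below are assumptions for illustration.
import torch
import torch.nn as nn

class SimpleMapper(nn.Module):
    """Placeholder MLP standing in for the generator G, discriminator D,
    and the two inverse-mapping networks P and Q (architectures assumed)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)

feat_dim = 257  # e.g. spectral feature dimension (assumed)
G = SimpleMapper(feat_dim, feat_dim)   # noisy speech -> enhanced speech
D = SimpleMapper(feat_dim, 1)          # adversarial discriminator
P = SimpleMapper(feat_dim, feat_dim)   # enhanced -> noisy input domain
Q = SimpleMapper(feat_dim, feat_dim)   # enhanced -> clean target domain

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_losses(noisy, clean):
    enhanced = G(noisy)
    # Original adversarial loss: the generator tries to fool the discriminator.
    adv_loss = bce(D(enhanced), torch.ones(noisy.size(0), 1))
    # Latent loss (assumed form): P and Q should recover the input data
    # domains from the generated distribution.
    p_rec = l1(P(enhanced), noisy)
    q_rec = l1(Q(enhanced), clean)
    latent_loss = p_rec + q_rec
    # Equilibrium loss (assumed form): keep the two inverse mappings balanced.
    equilibrium_loss = torch.abs(p_rec - q_rec)
    return adv_loss + latent_loss + equilibrium_loss

# Usage with random tensors standing in for batches of feature frames:
noisy = torch.randn(8, feat_dim)
clean = torch.randn(8, feat_dim)
loss = generator_losses(noisy, clean)
loss.backward()
```

In a full training loop this generator objective would alternate with a standard discriminator update; that part is omitted here since the abstract only enumerates the generator-side losses.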

Original language: English
Article number: 108568
Journal: Applied Soft Computing Journal
Volume: 120
DOIs
Publication status: Published - May 2022
Externally published: Yes

Keywords

  • Adversarial multi-task learning
  • Deep neural networks
  • Inverse mapping learning
  • Speech enhancement

ASJC Scopus subject areas

  • Software
