LaMuCo: Large-Scale Multilingual Conversation Speech Recognition Challenge

Qingqing Zhang, Lei Luo, Simin Xu, Yongjing Chen, Chuang Li, Sheng Li, Ruili Wang

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

Magic Data and M3Oriental have jointly initiated the “Large-scale Multilingual Speech Recognition Challenge.” Centered on multilingualism, this challenge seeks to explore and develop advanced multilingual speech dialogue systems that enable real-time cross-linguistic interaction, thereby strengthening technological collaboration within the Asia-Pacific region. In a spirit of open-source cooperation, we invite developers, researchers, and enterprises worldwide to participate actively in advancing multilingual speech processing technologies, overcoming linguistic barriers, and fostering global technological collaboration and knowledge sharing.

Original language: English
Title of host publication: Proceedings of the 6th ACM International Conference on Multimedia in Asia Workshops, MMAsia 2024 Workshops
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9798400713149
DOIs
Publication status: Published - 26 Dec 2024
Externally published: Yes
Event: 6th ACM International Conference on Multimedia in Asia Workshops, MMAsia 2024 Workshops - Auckland, New Zealand
Duration: 3 Dec 2024 – 6 Dec 2024

Publication series

Name: Proceedings of the 6th ACM International Conference on Multimedia in Asia Workshops, MMAsia 2024 Workshops

Conference

Conference: 6th ACM International Conference on Multimedia in Asia Workshops, MMAsia 2024 Workshops
Country/Territory: New Zealand
City: Auckland
Period: 3/12/24 – 6/12/24

Keywords

  • large language model
  • multilingual
  • spoken dialogue

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
