MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities

Research output: Journal publication › Conference article › peer-review

Abstract

Recent motion-aware large language models have demonstrated promising potential in unifying motion comprehension and generation. However, existing approaches primarily focus on coarse-grained motion-text modeling, where text describes the overall semantics of an entire motion sequence in just a few words. This limits their ability to handle fine-grained motion-relevant tasks, such as understanding and controlling the movements of specific body parts. To overcome this limitation, we pioneer MG-MotionLLM, a unified motion-language model for multi-granular motion comprehension and generation. We further introduce a comprehensive multi-granularity training scheme that incorporates a set of novel auxiliary tasks, such as localizing the temporal boundaries of motion segments from detailed text and detailed motion captioning, to facilitate mutual reinforcement of motion-text modeling across levels of granularity. Extensive experiments show that MG-MotionLLM achieves superior performance on classical text-to-motion and motion-to-text tasks, and exhibits potential in novel fine-grained motion comprehension and editing tasks. Project page: CVI-SZU/MG-MotionLLM
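As a rough illustration of the idea described in the abstract (not the authors' released code), the sketch below shows one way coarse- and fine-grained motion-text tasks could all be phrased as text-to-text instruction pairs over discretized motion tokens. Every prompt template, token name, and the `build_example` helper are hypothetical placeholders introduced here for illustration only.

```python
# Hypothetical sketch: unifying multi-granular motion tasks as instruction pairs.
# None of these templates or token names come from the paper.
from typing import List, Dict

def motion_to_tokens(frame_ids: List[int]) -> str:
    """Render placeholder motion-codebook indices as special tokens."""
    return " ".join(f"<motion_{i}>" for i in frame_ids)

def build_example(task: str, motion: List[int], text: str) -> Dict[str, str]:
    """Pack one training pair; the instruction wording is invented here."""
    if task == "text_to_motion":           # coarse-grained generation
        return {"input": f"Generate a motion for: {text}",
                "target": motion_to_tokens(motion)}
    if task == "motion_to_detailed_text":  # fine-grained captioning
        return {"input": f"Describe each body part in: {motion_to_tokens(motion)}",
                "target": text}
    if task == "temporal_localization":    # auxiliary boundary task
        return {"input": f"In {motion_to_tokens(motion)}, when does this happen: {text}?",
                "target": "frames 12-48"}  # dummy answer for illustration
    raise ValueError(f"unknown task: {task}")

if __name__ == "__main__":
    print(build_example("text_to_motion", [3, 17, 5],
                        "a person waves with the left hand"))
```

Casting every granularity into the same text-to-text format is what would let a single language model handle generation, captioning, and localization with shared weights, which is the spirit of the multi-granularity training scheme the abstract describes.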
Original language: English
Pages (from-to): 27849-27858
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publication status: Published - Aug 2025

Free Keywords

  • multiple granularities
  • human motion generation
  • human motion understanding
  • human motion captioning
  • fine-grained human motion understanding
  • large language model (LLM)

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
