Human communication is inherently multimodal, combining verbal and non-verbal cues such as speech, facial expressions, and body gestures. Modeling these behaviors is essential for understanding human interaction and for creating virtual characters that communicate naturally in applications such as games, films, and virtual reality. However, existing motion generation models are typically limited to a specific input modality (speech, text, or motion data) and cannot fully leverage the diversity of available data. In this paper, we propose a novel framework that unifies verbal and non-verbal language using multimodal language models for human motion understanding and generation. The model flexibly accepts text, speech, and motion, or any combination of them, as input. Coupled with our novel pre-training strategy, it not only achieves state-of-the-art performance on co-speech gesture generation but also requires significantly less training data. Our model further unlocks an array of novel tasks such as editable gesture generation and emotion prediction from motion. We believe unifying the verbal and non-verbal language of human motion is essential for real-world applications, and language models offer a powerful approach to achieving this goal.
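To make the unified-modality idea more concrete, below is a minimal, hypothetical sketch of how text, speech, and motion tokens could share a single vocabulary consumed by one language model. The special tokens, vocabulary sizes, and function names here are illustrative assumptions, not the paper's actual tokenizers or training setup.

```python
# Illustrative sketch only: vocabulary sizes, special tokens, and function
# names are hypothetical assumptions, not the paper's implementation.

SPECIAL_TOKENS = ["<pad>", "<text>", "<speech>", "<motion>", "<sep>"]

def build_unified_vocab(text_vocab_size=32000,
                        speech_codebook_size=1024,
                        motion_codebook_size=512):
    """Lay out one shared token space: special tokens first, then text,
    then discretized speech codes, then discretized motion codes."""
    offsets = {}
    cursor = len(SPECIAL_TOKENS)
    for name, size in [("text", text_vocab_size),
                       ("speech", speech_codebook_size),
                       ("motion", motion_codebook_size)]:
        offsets[name] = cursor
        cursor += size
    return offsets, cursor  # per-modality offsets, total vocabulary size

def to_unified_ids(modality, local_ids, offsets):
    """Shift modality-local token ids into the shared vocabulary."""
    return [offsets[modality] + i for i in local_ids]

def pack_sequence(segments, offsets):
    """Interleave (modality, local_ids) segments into one flat sequence
    that a single language model can consume, with modality markers."""
    specials = {tok: i for i, tok in enumerate(SPECIAL_TOKENS)}
    seq = []
    for modality, local_ids in segments:
        seq.append(specials[f"<{modality}>"])
        seq.extend(to_unified_ids(modality, local_ids, offsets))
        seq.append(specials["<sep>"])
    return seq

if __name__ == "__main__":
    offsets, vocab_size = build_unified_vocab()
    # e.g. a short text prompt, a few speech codes, and a motion snippet,
    # all packed into one token sequence over the shared vocabulary
    sequence = pack_sequence(
        [("text", [17, 402, 9]), ("speech", [3, 3, 81]), ("motion", [44, 45])],
        offsets,
    )
    print(vocab_size, sequence)
```

Packing all modalities into one token stream like this is what lets a single autoregressive model accept any combination of text, speech, and motion as input; the exact discretization of speech and motion would depend on the chosen tokenizers.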
@article{chen2024body_of_language,
  title={The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion},
  author={Changan Chen and Juze Zhang and Shrinidhi Kowshika Lakshmikanth and Yusu Fang and Ruizhi Shao and Gordon Wetzstein and Li Fei-Fei and Ehsan Adeli},
  journal={arXiv preprint},
  year={2024},
}