The Language of Motion:
Unifying Verbal and Non-verbal Language of 3D Human Motion

Stanford University
*indicates equal contribution.

Abstract

Human communication is inherently multimodal, involving a combination of verbal and non-verbal cues such as speech, facial expressions, and body gestures. Modeling these behaviors is essential for understanding human interaction and for creating virtual characters that can communicate naturally in applications like games, films, and virtual reality. However, existing motion generation models are typically limited to specific input modalities—either speech, text, or motion data—and cannot fully leverage the diversity of available data. In this paper, we propose a novel framework that unifies verbal and non-verbal language using multimodal language models for human motion understanding and generation. The model can flexibly take text, speech, motion, or any combination of them as input. Coupled with our novel pre-training strategy, our model not only achieves state-of-the-art performance on co-speech gesture generation but also requires much less training data. Our model also unlocks an array of novel tasks such as editable gesture generation and emotion prediction from motion. We believe unifying the verbal and non-verbal language of human motion is essential for real-world applications, and language models offer a powerful approach to achieving this goal.

Multimodal Language Model Framework

Multimodal Framework

We employ modality-specific tokenizers to process the various input modalities. Specifically, we train a compositional body motion VQ-VAE that tokenizes face, hands, upper-body, and lower-body motion into discrete tokens, and we merge these with the modality-specific vocabularies for audio and text into a unified multimodal vocabulary. During training, mixed tokens from different modalities are used as input, and the output is generated by an encoder-decoder language model: the mixed tokens are fed into the transformer encoder, while the decoder predicts the probability distribution of the next token autoregressively at each step.
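To make the token-mixing idea concrete, below is a minimal sketch of a unified multimodal vocabulary feeding an encoder-decoder transformer. It is not the authors' implementation: the codebook sizes, modality names, and the generic PyTorch nn.Transformer backbone are illustrative assumptions; only the overall scheme (per-modality token ranges, mixed-token encoder input, autoregressive decoding over the shared vocabulary) follows the description above.

```python
# Illustrative sketch only: sizes, names, and the tokenizer stubs are assumptions,
# not the paper's actual configuration.
import torch
import torch.nn as nn

# --- 1. Unified multimodal vocabulary --------------------------------------
# Each modality gets its own contiguous range of token ids so that motion
# (face / hands / upper body / lower body), audio, and text tokens can be
# mixed freely in a single sequence.
VOCAB_SIZES = {
    "text": 32000,       # e.g. a pretrained text tokenizer (assumed size)
    "audio": 1024,       # discrete audio codes (assumed size)
    "face": 256,         # VQ-VAE codebook sizes per body part (assumed)
    "hands": 256,
    "upper_body": 256,
    "lower_body": 256,
}
OFFSETS, TOTAL_VOCAB = {}, 0
for name, size in VOCAB_SIZES.items():
    OFFSETS[name] = TOTAL_VOCAB
    TOTAL_VOCAB += size

def to_unified(tokens: torch.Tensor, modality: str) -> torch.Tensor:
    """Shift modality-local token ids into the shared vocabulary."""
    return tokens + OFFSETS[modality]

# --- 2. Encoder-decoder language model --------------------------------------
class MultimodalSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        # Causal mask so the decoder predicts the next token autoregressively.
        causal = self.transformer.generate_square_subsequent_mask(tgt_tokens.size(1))
        hidden = self.transformer(
            self.embed(src_tokens), self.embed(tgt_tokens), tgt_mask=causal
        )
        return self.head(hidden)  # next-token logits over the unified vocabulary

# Example: condition on mixed audio + text tokens, decode upper-body motion tokens.
src = torch.cat([
    to_unified(torch.randint(0, VOCAB_SIZES["audio"], (1, 50)), "audio"),
    to_unified(torch.randint(0, VOCAB_SIZES["text"], (1, 20)), "text"),
], dim=1)
tgt = to_unified(torch.randint(0, VOCAB_SIZES["upper_body"], (1, 30)), "upper_body")
logits = MultimodalSeq2Seq(TOTAL_VOCAB)(src, tgt)   # shape: (1, 30, TOTAL_VOCAB)
```

Because every modality lives in one shared id space, any combination of text, speech, and motion tokens can be concatenated on the encoder side, and the same decoder head can emit tokens for any output modality.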

Co-speech Gesture Generation

Editable Gesture Generation

Text-to-motion Generation

Emotion Understanding

Failure Case

BibTeX

@inproceedings{chen2024body_of_language,
  title={The Language of Motion: Unifying Verbal and Non-verbal Language of 3D Human Motion},
  author={Changan Chen and Juze Zhang and Shrinidhi Kowshika Lakshmikanth and Yusu Fang and Ruizhi Shao and Gordon Wetzstein and Li Fei-Fei and Ehsan Adeli},
  booktitle={arXiv},
  year={2024},
}