MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations

Abstract

In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations. Building upon vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning, our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resultant motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications. We demonstrate the versatility of MoConVQ through several applications: universal tracking control from various motion sources, interactive character control with latent motion representations using supervised learning, physics-based motion generation from natural language descriptions using the GPT framework, and, most interestingly, seamless integration with large language models (LLMs) through in-context learning to tackle complex and abstract tasks.
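To make the idea of a "scalable discrete representation" concrete, the sketch below shows the standard vector-quantization step used in VQ-VAE-style models: encoder latents are snapped to their nearest learnable codebook entry, with codebook and commitment losses and a straight-through gradient. This is only a generic illustration under assumed shapes and hyperparameters (e.g., `num_codes`, `code_dim`), not the MoConVQ implementation itself.

```python
# Generic VQ-VAE quantization sketch (illustrative only; not the MoConVQ code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)  # learnable discrete codes
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, z_e):
        # z_e: encoder output, shape (batch, time, code_dim)
        flat = z_e.reshape(-1, z_e.shape[-1])
        # squared L2 distance from each latent to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                  # nearest code per frame
        z_q = self.codebook(indices).view_as(z_e)     # quantized latents
        # standard VQ-VAE codebook + commitment losses
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator: pass gradients from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), loss
```

The resulting integer code sequence is what downstream components (e.g., a GPT-style generator or an LLM prompt) can consume as discrete motion tokens.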

Publication
In ACM Transactions on Graphics (Proceedings of SIGGRAPH 2024)
Heyuan Yao (姚贺源)
Ph.D. Student

I’m a Ph.D. student at Peking University, advised by Libin Liu.
