Physics-based Animation

MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations

We present MoConVQ, a unified framework that enables simulated avatars to acquire diverse skills from large, unstructured datasets. Leveraging a rich and scalable discrete skill representation, MoConVQ supports a broad range of applications, including pose estimation, interactive control, text-to-motion generation, and, notably, the integration of motion generation with Large Language Models (LLMs).
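MoConVQ's discrete skill representation is built on vector-quantized codebooks. A minimal NumPy sketch of the core quantization step, mapping continuous motion latents to discrete skill tokens, might look like the following (the function name, codebook size, and dimensions are illustrative, not taken from the paper's code):

```python
import numpy as np

def quantize(z, codebook):
    """Map each continuous latent vector to its nearest codebook entry."""
    # squared distances between each latent (N, D) and each code (K, D)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)            # discrete skill token ids, shape (N,)
    return codebook[idx], idx         # quantized latents and their ids

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))    # K=8 codes, D=4 latent dims (illustrative)
z = rng.normal(size=(5, 4))           # 5 continuous motion latents
zq, tokens = quantize(z, codebook)
```

The discrete token ids are what make the representation compatible with token-based models such as LLMs.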

ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters

We introduce ControlVAE, a novel model-based framework that learns generative motion control policies together with flexible skill embeddings for motion generation and downstream tasks.
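As a variational autoencoder, ControlVAE samples its skill embeddings from a learned latent distribution; the standard reparameterization trick used for this can be sketched as follows (a generic VAE sampling step, with illustrative names and values rather than the paper's implementation):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z = mu + sigma * eps; gradients can flow through mu and log_var."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.array([0.1, -0.2, 0.3])          # encoder mean (illustrative)
log_var = np.array([-1.0, -1.0, -1.0])   # encoder log-variance (illustrative)
z = reparameterize(mu, log_var, rng)     # sampled skill embedding
```

Sampling in this way keeps the skill space smooth, so nearby embeddings decode to similar control behaviors.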