SCAIL: Towards Studio-Grade Character Animation via In-Context Learning of 3D-Consistent Pose Representations

Achieving character animation that meets studio-grade production standards remains challenging despite recent progress. Existing approaches can transfer motion from a driving video to a reference image, but they often fail to preserve structural fidelity and temporal consistency in in-the-wild scenarios involving complex motion and cross-identity animation. In this work, we present SCAIL (Studio-grade Character Animation via In-context Learning), a framework that addresses these challenges through two key innovations. First, we propose a novel 3D pose representation that provides a robust and flexible motion signal. Second, we introduce a full-context pose injection mechanism within a diffusion-transformer architecture, enabling effective spatio-temporal reasoning over full motion sequences. To align with studio-level requirements, we develop a curated data pipeline that ensures both diversity and quality, and we establish a comprehensive benchmark for systematic evaluation. Experiments show that SCAIL achieves state-of-the-art performance and advances character animation toward studio-grade reliability and realism.
Exploration of Different Injection Methods
SCAIL builds on the Wan-I2V models and incorporates a 3D-consistent pose representation to learn precise, identity-agnostic motion. After comparing different injection methods, we adopt full-context pose injection, which lets the model learn spatio-temporal motion characteristics over the entire sequence. We further use a Pose-shifted RoPE to facilitate learning of spatio-temporal relations between video tokens and pose tokens.
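To make full-context pose injection concrete, below is a minimal PyTorch sketch, not the released implementation: pose tokens for the whole motion sequence are concatenated with the video latent tokens in a single attention context, and the pose tokens' rotary positions are offset by a constant shift so the model can align the two streams while still telling them apart. The names (FullContextPoseAttention, rope_angles, apply_rope), the 1-D positional indexing (Wan-style DiTs typically use a factorized 3-D RoPE over time, height, and width), and the constant pose_shift are all illustrative assumptions rather than SCAIL's actual API.

```python
# Minimal sketch of full-context pose injection with a pose-shifted RoPE.
# Shapes, names, and the 1-D RoPE are simplifying assumptions, not the
# paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Rotary angles for integer token positions; returns (L, dim // 2)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32,
                                            device=positions.device) / dim))
    return positions.float()[:, None] * inv_freq[None, :]


def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate x of shape (B, H, L, head_dim) by angles of shape (L, head_dim // 2)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out1 = x1 * cos - x2 * sin
    out2 = x1 * sin + x2 * cos
    return torch.stack((out1, out2), dim=-1).flatten(-2)


class FullContextPoseAttention(nn.Module):
    """Self-attention over the concatenation of video and pose tokens.

    Pose tokens reuse the video tokens' positional grid offset by a constant
    shift (a "pose-shifted" RoPE in this sketch), so attention can relate
    each pose token to the video tokens it drives while the two streams
    remain distinguishable.
    """

    def __init__(self, dim: int, num_heads: int, pose_shift: int = 4096):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.pose_shift = pose_shift                 # assumed constant RoPE offset
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, vid: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        B, Lv, D = vid.shape
        Lp = pose.shape[1]
        x = torch.cat([vid, pose], dim=1)            # full context: video + pose
        qkv = self.qkv(x).view(B, Lv + Lp, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)         # each (B, H, L, head_dim)

        # Video tokens keep their positions; pose tokens get the same index
        # range shifted by a constant.
        pos_vid = torch.arange(Lv, device=x.device)
        pos_pose = torch.arange(Lp, device=x.device) + self.pose_shift
        angles = rope_angles(torch.cat([pos_vid, pos_pose]), self.head_dim)
        q, k = apply_rope(q, angles), apply_rope(k, angles)

        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(B, Lv + Lp, D)
        return self.proj(out)[:, :Lv]                # return updated video tokens only


if __name__ == "__main__":
    # Toy shapes only; real models operate on far longer token sequences.
    attn = FullContextPoseAttention(dim=256, num_heads=8)
    vid_tokens = torch.randn(2, 4 * 8 * 8, 256)      # 4 latent frames of 8x8 tokens
    pose_tokens = torch.randn(2, 4 * 16, 256)        # 16 pose tokens per frame
    print(attn(vid_tokens, pose_tokens).shape)       # torch.Size([2, 256, 256])
```

In this sketch only the video tokens are updated and passed on; whether the actual model also propagates updated pose tokens across transformer blocks, and how the shift interacts with a 3-D RoPE, are details left open here.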
Demo videos: Ballet, Straddle, Acrobats, Expressive Body Movements, Occluded Postures, Fighting Scenes, Motion of Nonstandard Figures, Anime Characters' Interactions.
BibTeX

@article{yan2025scail,
  title={SCAIL: Towards Studio-Grade Character Animation via In-Context Learning of 3D-Consistent Pose Representations},
  author={Yan, Wenhao and Ye, Sheng and Yang, Zhuoyi and Teng, Jiayan and Dong, ZhenHui and Wen, Kairui and Gu, Xiaotao and Liu, Yong-Jin and Tang, Jie},
  journal={arXiv preprint arXiv:2512.05905},
  year={2025}
}