Human image animation involves generating a video from a static image by following a specified pose sequence. Current approaches typically adopt a multi-stage pipeline that separately learns appearance and motion, which often leads to appearance degradation and temporal inconsistencies. To address these issues, we propose VividPose, an innovative end-to-end pipeline based on Stable Video Diffusion (SVD) that ensures superior temporal stability. To enhance the retention of human identity, we propose an identity-aware appearance controller that integrates additional facial information without compromising other appearance details such as clothing texture and background. This approach ensures that the generated videos maintain high fidelity to the identity of the human subject, preserving key facial features across various poses. To accommodate diverse human body shapes and hand movements, we introduce a geometry-aware pose controller that utilizes both dense rendering maps from SMPL-X and sparse skeleton maps. This enables accurate alignment of pose and shape in the generated videos, providing a robust framework capable of handling a wide range of body shapes and dynamic hand movements. Extensive qualitative and quantitative experiments on the UBCFashion and TikTok benchmarks demonstrate that our method achieves state-of-the-art performance. Furthermore, VividPose exhibits superior generalization capabilities on our proposed in-the-wild dataset. Code and models will be made available.
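
To make the two conditioning paths described above concrete, the following is a minimal PyTorch sketch of how a geometry-aware pose controller might fuse dense SMPL-X rendering maps with sparse skeleton maps, and how an identity-aware appearance controller might project a face-identity embedding for injection into the denoiser. All module names, channel sizes, and the additive fusion are illustrative assumptions, not the released VividPose implementation.

# Hypothetical sketch of the two controllers named in the abstract.
# Channel sizes, fusion strategy, and class names are assumptions.
import torch
import torch.nn as nn


class GeometryAwarePoseController(nn.Module):
    """Encodes a dense SMPL-X render map and a sparse skeleton map into one
    pose-conditioning feature per frame (channel sizes are assumptions)."""

    def __init__(self, out_channels: int = 320):
        super().__init__()
        self.dense_branch = nn.Sequential(   # RGB SMPL-X rendering map
            nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )
        self.sparse_branch = nn.Sequential(  # RGB skeleton map
            nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, smplx_render: torch.Tensor, skeleton: torch.Tensor) -> torch.Tensor:
        # Simple additive fusion of the dense (shape) and sparse (pose) cues.
        return self.dense_branch(smplx_render) + self.sparse_branch(skeleton)


class IdentityAwareAppearanceController(nn.Module):
    """Projects a face-identity embedding so it can be injected alongside the
    reference appearance features (e.g. via cross-attention in the denoiser)."""

    def __init__(self, face_dim: int = 512, context_dim: int = 1024):
        super().__init__()
        self.proj = nn.Linear(face_dim, context_dim)

    def forward(self, face_embedding: torch.Tensor) -> torch.Tensor:
        return self.proj(face_embedding)


if __name__ == "__main__":
    pose_ctrl = GeometryAwarePoseController()
    id_ctrl = IdentityAwareAppearanceController()
    frames = 8
    pose_feat = pose_ctrl(torch.randn(frames, 3, 64, 64), torch.randn(frames, 3, 64, 64))
    id_tokens = id_ctrl(torch.randn(1, 512))
    print(pose_feat.shape, id_tokens.shape)  # (8, 320, 64, 64) and (1, 1024)

In a full SVD-based pipeline, the pose feature would typically be added to the video denoiser's input features frame by frame, while the projected identity tokens would be concatenated with the reference-image context used for cross-attention.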
@article{wang2024vividpose,
title={VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation},
author={Qilin Wang and Zhengkai Jiang and Chengming Xu and Jiangning Zhang and Yabiao Wang and Xinyi Zhang and Yun Cao and Weijian Cao and Chengjie Wang and Yanwei Fu},
journal={arXiv preprint arXiv:2405.18156},
website={https://Kelu007.github.io/vivid-pose/},
year={2024}
}