This is the official repository for the following two papers.
(📌 DIDiffGes details the formulation and inference of the semi-implicit accelerated diffusion, which plays a crucial role in HoloGest's efficient generation.)
🔥 (AAAI 2025) DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech
The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025
🔥 (3DV 2025) HoloGest: Decoupled Diffusion and Motion Priors for Generating Holistically Expressive Co-speech Gestures
International Conference on 3D Vision 2025 (3DV), 2025
[Project Page] [Paper]
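The decoupled semi-implicit diffusion in DIDiffGes achieves real-time generation chiefly by cutting the number of denoising steps. As a rough, generic illustration of that lever (not the paper's actual model — the denoiser, schedule, and all names below are hypothetical stand-ins), here is a minimal few-step DDIM-style sampler:

```python
import numpy as np

def toy_denoiser(x, t):
    # Hypothetical stand-in for a learned noise-prediction network.
    # It simply scales the sample by the normalized timestep.
    return x * t

def ddim_sample(x_T, alphas_bar, steps):
    """Deterministic DDIM-style sampling over a small subset of timesteps.

    Fewer `steps` means fewer network evaluations per generated clip,
    which is the basic reason few-step diffusion can run in real time.
    """
    T = len(alphas_bar) - 1
    timesteps = np.linspace(T, 0, steps + 1).astype(int)
    x = x_T
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_bar[t_cur], alphas_bar[t_next]
        eps = toy_denoiser(x, t_cur / T)                       # predicted noise
        x0 = (x - np.sqrt(1 - a_cur) * eps) / np.sqrt(a_cur)  # predicted clean sample
        x = np.sqrt(a_next) * x0 + np.sqrt(1 - a_next) * eps  # jump to next timestep
    return x

# Cosine-like schedule: alpha_bar decreases from ~1 toward 0 over 1000 steps.
alphas_bar = np.cos(np.linspace(0, np.pi / 2 * 0.99, 1001)) ** 2
x_T = np.random.default_rng(0).standard_normal(8)
sample = ddim_sample(x_T, alphas_bar, steps=4)  # only 4 denoiser calls
print(sample.shape)
```

The real method additionally decouples the gesture representation and uses a semi-implicit (adversarially trained) denoising distribution, which this sketch does not attempt to reproduce.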
We use an avatar to narrate our method in audio, vividly walking through its details.
demo_avatar.mp4
- [2025/03/17] Code of HoloGest released ✅
- [2024/12/15] DIDiffGes got accepted by AAAI 2025! 🎉
- [2024/11/10] HoloGest got accepted by 3DV 2025! 🎉
This code was tested on an NVIDIA GeForce RTX 3070 Ti and requires:
- conda3 or miniconda3
```shell
cd ./main/
bash pip_install.sh
```
- Download the CLIP model, ASR model, and pre-trained weights from here. Put all the folders in `./main/hologest/`.
- Download the TextEncoder and put it in `./main/model/`.
- Download the WavLM weights from here and put them in `./main/hologest/`.
- Download the Motion-Prior from here and put it in `./main/hologest/`.
```shell
bash demo.sh
```
git_hologest_demo.mp4
@inproceedings{yu2023acr,
title = {ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction},
author = {Yu, Zhengdi and Huang, Shaoli and Chen, Fang and Breckon, Toby P. and Wang, Jue},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023}
}
The PyTorch implementation of HoloGest is based on ExpGest. We borrow ideas from SiDDMs and some code from [DIDiffGes]. We thank all the authors for their impressive work!
For technical questions, please contact cyk19990422@gmail.com
For commercial licensing, please contact shaolihuang@tencent.com