
DNA-Rendering-Processed Dataset
Project Page | Paper | Code | Model
To enable Diffuman4D model training, we meticulously process the DNA-Rendering dataset by recalibrating camera parameters, optimizing image color correction matrices (CCMs), predicting foreground masks, and estimating human skeletons.
To promote future research on human-centric 3D/4D generation, we open-source our re-annotated labels for the DNA-Rendering dataset in this repo, covering 1000+ human multi-view video sequences. Each sequence is captured by 48 cameras over 225 (or 150) frames, amounting to about 10 million images in total.
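As background on the CCM step mentioned above: a color correction matrix acts as a per-pixel 3x3 linear transform of the RGB values. The sketch below is a minimal illustration of applying such a matrix; the array names, the placeholder inputs, and the identity CCM are assumptions for demonstration, not the repo's actual loading or optimization code.

```python
import numpy as np

def apply_ccm(image: np.ndarray, ccm: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color correction matrix to an H x W x 3 linear-RGB image."""
    corrected = image.reshape(-1, 3) @ ccm.T  # per-pixel linear color transform
    return corrected.reshape(image.shape).clip(0.0, 1.0)

# Placeholder inputs for illustration; in practice the CCM would come from the
# released per-camera annotations (exact file layout not shown here).
image = np.random.rand(720, 1280, 3).astype(np.float32)  # linear RGB in [0, 1]
ccm = np.eye(3, dtype=np.float32)                         # identity CCM as a stand-in
print(apply_ccm(image, ccm).shape)  # (720, 1280, 3)
```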
Usage
To use this dataset:
- Concurrently (1) fill out this form and (2) request access to the dataset on this page.
- Download the dataset by following the guidelines here (a minimal download sketch is shown below).
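For the download step, a minimal sketch using huggingface_hub is shown below, assuming access has been approved and you are logged in (e.g. via `huggingface-cli login`); the guidelines linked above remain the authoritative instructions, and the sequence-name filter in the comment is a hypothetical example.

```python
from huggingface_hub import snapshot_download

# Downloads the full dataset repo; requires approved access to the gated dataset
# and an authenticated Hugging Face session (e.g. `huggingface-cli login`).
local_dir = snapshot_download(
    repo_id="krahets/dna_rendering_processed",
    repo_type="dataset",
    # allow_patterns=["0007_01/*"],  # hypothetical: restrict to a single sequence
)
print("Dataset downloaded to:", local_dir)
```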
Cite
@inproceedings{jin2025diffuman4d,
  title={Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models},
  author={Jin, Yudong and Peng, Sida and Wang, Xuan and Xie, Tao and Xu, Zhen and Yang, Yifan and Shen, Yujun and Bao, Hujun and Zhou, Xiaowei},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2025}
}

@article{2023dnarendering,
  title={DNA-Rendering: A Diverse Neural Actor Repository for High-Fidelity Human-centric Rendering},
  author={Wei Cheng and Ruixiang Chen and Wanqi Yin and Siming Fan and Keyu Chen and Honglin He and Huiwen Luo and Zhongang Cai and Jingbo Wang and Yang Gao and Zhengming Yu and Zhengyu Lin and Daxuan Ren and Lei Yang and Ziwei Liu and Chen Change Loy and Chen Qian and Wayne Wu and Dahua Lin and Bo Dai and Kwan-Yee Lin},
  journal={arXiv preprint},
  volume={arXiv:2307.10173},
  year={2023}
}