---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
size_categories:
- n>1T
tags:
- Traditional Chinese Medicine
configs:
- config_name: TCM_Book_Corpus (Text)
data_files: TCM_pretrain_book_corpus.json
- config_name: TCM_Web_Corpus (Text)
data_files: TCM_pretrain_web_corpus.jsonl
- config_name: TCM_Web_Interleaved_Data (Text & Image)
data_files: TCM_pretrain_web_vision.json
- config_name: TCM_Book_Interleaved_Data (Text & Image)
data_files: TCM_pretrain_book_vision.json
- config_name: TCM__synthesized_vision (Text & Image)
data_files: TCM_pretrain_synthesized_vision.json
---
# <span>📚 Introduction</span>
This dataset is the pre-training dataset for [ShizhenGPT](https://github.com/FreedomIntelligence/ShizhenGPT), a multimodal LLM for **Traditional Chinese Medicine (TCM)**. We open-source the largest existing TCM text corpus (over 5B tokens), collected from TCM-related websites and books, as well as the largest-scale TCM image-text pretraining dataset.
For details, see our [paper](https://arxiv.org/abs/2508.14706) and [GitHub repository](https://github.com/FreedomIntelligence/ShizhenGPT).
# <span>📊 Dataset Overview</span>
The open-sourced pre-training dataset consists of five parts:
| | Modality | Description | Data Quantity |
| ---------------------------------- | ------------ | ------------------------------------------------------------------------- | ------------------------------ |
| TCM\_Book\_Corpus                  | 📝 Text             | A cleaned corpus of 3,256 TCM textbooks.                                   | \~0.5B tokens                    |
| TCM\_Web\_Corpus                   | 📝 Text             | A TCM corpus collected from the web.                                       | Over 5B tokens                   |
| TCM\_Book\_Interleaved\_Data       | 📝 Text, 👁️ Visual | Interleaved text-image data from 306 TCM books.                            | 41,459 entries, 50,690 images    |
| TCM\_Web\_Interleaved\_Data        | 📝 Text, 👁️ Visual | Interleaved text-image data from the TCM web corpus.                       | 505,465 entries, 1,143,954 images |
| TCM\_pretrain\_synthesized\_vision | 📝 Text, 👁️ Visual | TCM image-text pairs generated from images and their context using GPT-4o. | 144,239 entries, 159,534 images  |
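Each text configuration ships as a JSON/JSONL file (see the `configs` section of this card's metadata), so the corpora can also be inspected with the standard library before committing to a full download. The sketch below streams a JSONL file record by record; the `"text"` field name is an assumption, so inspect one record of the real file first.

```python
import json

# Minimal sketch: stream a JSONL corpus file (e.g. TCM_pretrain_web_corpus.jsonl)
# one record at a time, without loading everything into memory.
def iter_records(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Demo with a tiny in-memory sample instead of the real corpus file.
# NOTE: the "text" field name is an assumption about the schema.
sample = '{"text": "黄芪性微温，味甘。"}\n{"text": "当归补血活血。"}\n'
with open("sample.jsonl", "w", encoding="utf-8") as f:
    f.write(sample)

records = list(iter_records("sample.jsonl"))
print(len(records))  # 2
```

The same files can of course be loaded directly with the 🤗 `datasets` library by passing one of the config names listed in this card's metadata to `load_dataset`.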
> ⚠️ Note: Due to privacy and ethical concerns, TCM signal datasets (e.g., sound and pulse) are not provided. For some signal data, refer to the [Instruction Dataset](https://huggingface.co/datasets/FreedomIntelligence/TCM-Instruction-Tuning-ShizhenGPT).
# <span>📖 Citation</span>
If you find our data useful, please consider citing our work!
```
@misc{chen2025shizhengptmultimodalllmstraditional,
title={ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine},
author={Junying Chen and Zhenyang Cai and Zhiheng Liu and Yunjin Yang and Rongsheng Wang and Qingying Xiao and Xiangyi Feng and Zhan Su and Jing Guo and Xiang Wan and Guangjun Yu and Haizhou Li and Benyou Wang},
year={2025},
eprint={2508.14706},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2508.14706},
}
```