update README
README.md
# Model Card for HLLM

[](https://arxiv.org/abs/2409.12740)
[](https://arxiv.org/abs/2508.18118)
[](https://github.com/bytedance/HLLM)

This repo hosts the HLLM and HLLM-Creator checkpoints.
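The checkpoints can be fetched with the Hugging Face CLI; a minimal sketch, where `<repo_id>` is a placeholder for the actual checkpoint repository id hosted here:

```shell
# Download a checkpoint snapshot to a local directory
# (<repo_id> is a placeholder, not a real repo id from this page).
huggingface-cli download <repo_id> --local-dir ./checkpoints
```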
For more details or tutorials see https://github.com/bytedance/HLLM.

- HLLM effectively transfers the world knowledge encoded during the LLM pre-training stage into the recommendation model, encompassing both item feature extraction and user interest modeling. Nevertheless, task-specific fine-tuning with recommendation objectives remains essential.
- HLLM exhibits excellent scalability, with performance continuously improving as data volume and model parameters increase. This scalability highlights the potential of the approach on even larger datasets and model sizes.
HLLM-Creator is designed for personalized creative generation:

- HLLM-Creator enables precise user interest modeling and fine-grained content personalization.
- A Chain-of-Thought-based data construction pipeline expands the personalization space and ensures factual consistency, effectively reducing hallucinations in generated titles.
- A flexible and efficient inference scheme supports large-scale industrial deployment, with significant positive results in Douyin search advertising demonstrating its real-world impact.

## Comparison with state-of-the-art methods (HLLM)
| Method | Dataset | Negatives | R@10 | R@50 | R@200 | N@10 | N@50 | N@200 |
| ------ | ------- | --------- | ---- | ---- | ----- | ---- | ---- | ----- |
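In the table, R@K and N@K denote Recall@K and NDCG@K. As a reference for reading these columns, here is a minimal sketch of how the two metrics are typically computed in next-item recommendation with a single held-out positive (function names are illustrative, not from the HLLM codebase):

```python
import math

def recall_at_k(ranked_items, target, k):
    """Recall@K with one relevant item: 1 if the held-out item appears in the top K."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    """NDCG@K with one relevant item: 1/log2(rank + 1) if ranked in the top K, else 0."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target) + 1  # 1-based rank of the positive item
        return 1.0 / math.log2(rank + 1)
    return 0.0

# Toy example: the model ranks item ids for one user; item 42 is the held-out positive.
ranking = [7, 42, 13, 99, 5]
print(recall_at_k(ranking, 42, 10))  # 1.0 (item 42 is in the top 10)
print(ndcg_at_k(ranking, 42, 10))    # 1/log2(3), about 0.6309 (ranked 2nd)
```

Per-user scores are averaged over the test set to produce table entries like R@10 and N@10.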
author={Junyi Chen and Lu Chi and Bingyue Peng and Zehuan Yuan},
journal={arXiv preprint arXiv:2409.12740},
year={2024}
}

@article{HLLM-Creator,
title={HLLM-Creator: Hierarchical LLM-based Personalized Creative Generation},
author={Junyi Chen and Lu Chi and Siliang Xu and Shiwei Ran and Bingyue Peng and Zehuan Yuan},
journal={arXiv preprint arXiv:2508.18118},
year={2025}
}
```