Update README.md

README.md CHANGED
@@ -59,7 +59,7 @@ configs:
 
 **ProgressGym-HistText** is part of the **ProgressGym** framework for research and experimentation on *progress alignment* - the emulation of moral progress in AI alignment algorithms, as a measure to prevent risks of societal value lock-in.
 
-To quote the paper *ProgressGym: Alignment with a Millennium of Moral Progress*:
+To quote the paper *[ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)*:
 
 > Frontier AI systems, including large language models (LLMs), hold increasing influence over the epistemology of human users. Such influence can reinforce prevailing societal values, potentially contributing to the lock-in of misguided moral beliefs and, consequently, the perpetuation of problematic moral practices on a broad scale.
 >

@@ -87,7 +87,7 @@ Please note that dimensions of the value embeddings are only chosen for demonstr
 
 ## Links
 
-- **[Paper Preprint]** ProgressGym: Alignment with a Millennium of Moral Progress
+- **[Paper Preprint]** [ProgressGym: Alignment with a Millennium of Moral Progress](https://arxiv.org/abs/2406.20087)
 - **[Github Codebase]** PKU-Alignment/ProgressGym *(link coming soon)*
 - **[Huggingface Data & Model Collection]** [PKU-Alignment/ProgressGym](https://huggingface.co/collections/PKU-Alignment/progressgym-666735fcf3e4efa276226eaa)
 - **[PyPI Package]** *(coming soon)*

@@ -100,8 +100,8 @@ If the datasets, models, codebase, or framework of ProgressGym help you in your
 @article{progressgym,
   title={ProgressGym: Alignment with a Millennium of Moral Progress},
   author={Tianyi Qiu and Yang Zhang and Xuchuan Huang and Jasmine Xinze Li and Jiaming Ji and Yaodong Yang},
-  journal={arXiv preprint arXiv:2406.
-  eprint={2406.
+  journal={arXiv preprint arXiv:2406.20087},
+  eprint={2406.20087},
   eprinttype = {arXiv},
   year={2024}
 }
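As a usage note, the dataset this README describes can typically be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only: the repository id `PKU-Alignment/ProgressGym-HistText` is inferred from the dataset name and the PKU-Alignment collection linked above, and the available configurations and splits should be checked on the Hub page (the `configs:` front matter referenced in the first hunk suggests the dataset defines named configurations).

```python
# Minimal sketch (assumption, not part of the original README):
# load ProgressGym-HistText from the Hugging Face Hub and inspect one record.
from datasets import load_dataset

# Repository id inferred from the dataset name and the PKU-Alignment collection.
# If the README defines multiple configs, pass one explicitly,
# e.g. load_dataset(repo_id, "<config_name>").
repo_id = "PKU-Alignment/ProgressGym-HistText"
histtext = load_dataset(repo_id)

print(histtext)                      # available splits and their features
first_split = next(iter(histtext))   # e.g. "train"
print(histtext[first_split][0])      # first example record of that split
```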