pretty_name: FineWeb-HQ
size_categories:
- n>1T
---

# FineWeb-HQ

## Dataset Summary

**FineWeb-HQ** is a **high-quality, model-filtered pretraining dataset** derived as a subset of [**FineWeb**](https://huggingface.co/datasets/HuggingFaceFW/fineweb). FineWeb-HQ was created by selecting the **top 10% of FineWeb documents** based on a deep learning classifier trained to identify **structured and knowledge-rich samples**. This classifier uses **XLM-RoBERTa embeddings** to score documents.
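To make the selection mechanism concrete, here is a minimal sketch of scoring a document with XLM-RoBERTa embeddings and a linear quality head. This is an illustration only: it uses the public `xlm-roberta-base` checkpoint, and `quality_head` is a hypothetical, untrained placeholder rather than the classifier actually used to build FineWeb-HQ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
quality_head = torch.nn.Linear(768, 1)  # hypothetical placeholder head, not the released classifier

def quality_score(text: str) -> float:
    # Tokenize to at most 512 tokens, the encoder's context window
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
        embedding = hidden.mean(dim=1)                # mean-pooled document embedding
        return quality_head(embedding).squeeze().item()

# Documents would then be ranked by score and the top 10% retained.
```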

To validate our approach, we pretrained **1B-parameter LLMs** with a Llama-like architecture across multiple languages and scripts. The results showed **improvements on standard English benchmarks**, with our dataset outperforming its English counterparts [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu). For its multilingual version, **FineWeb2-HQ**, evaluations on **CMMLU (Chinese), MMLU (German), and MMLU (French)** demonstrated that it **matches FineWeb2's performance while being trained on 6x fewer tokens** and **surpasses it when fully trained**.

| Dataset | Ours | DCLM | FW-Edu | FW |
| :--- | :---: | :---: | :---: | :---: |
| **Average Rank** | 1.8333 | 2.3889 | 2.4444 | 3.3333 |
| ARC (Challenge) | 0.3550 | 0.3530 | **0.3850** | 0.3010 |
| ARC (Easy) | 0.6670 | 0.6470 | **0.6970** | 0.5880 |
| CommonsenseQA | 0.3870 | **0.4100** | 0.3770 | 0.3850 |
| HellaSwag | **0.6040** | 0.5960 | 0.5700 | 0.5930 |
| MMLU | 0.3400 | 0.3160 | **0.3470** | 0.3030 |
| OpenBookQA | 0.3860 | 0.3840 | **0.4180** | 0.3560 |
| PIQA | 0.7510 | 0.7510 | 0.7410 | **0.7620** |
| WinoGrande | **0.5720** | 0.5610 | 0.5660 | 0.5550 |
| TriviaQA | 0.0820 | **0.1240** | 0.0320 | 0.0370 |

For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).

## Key features

- **High-quality selection**: Top 10% of FineWeb documents by quality
- **Multilingual version**: [FineWeb2-HQ](https://huggingface.co/datasets/epfml/FineWeb2-HQ)
- **Model-based filtering**: Uses an XLM-RoBERTa embedding-based classifier to score documents
- **Enhanced benchmark performance**: Surpasses FineWeb's benchmark performance and is competitive with [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- **Fully open**: Emphasis on transparency

## Dataset structure

### Data fields

Each data entry includes the original [FineWeb data fields](https://huggingface.co/datasets/HuggingFaceFW/fineweb#data-fields) with the addition of:

- `quality_score`: quality score obtained by the quality classifier
- `embeddings`: array of float arrays containing 768-dimensional XLM-RoBERTa embeddings for every 512-token chunk of the tokenized text
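
As an illustration of how these fields can be accessed, here is a minimal sketch using the `datasets` library. The repository id `epfml/FineWeb-HQ` is an assumption based on the FineWeb2-HQ link above; adjust it if the actual repository differs.

```python
from datasets import load_dataset

# Stream the data so the full dataset is not downloaded up front
ds = load_dataset("epfml/FineWeb-HQ", split="train", streaming=True)  # repo id assumed

sample = next(iter(ds))
print(sample["quality_score"])        # score assigned by the quality classifier
print(len(sample["embeddings"]))      # number of 512-token chunks in the document
print(len(sample["embeddings"][0]))   # 768 (XLM-RoBERTa embedding dimension)
```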
## Licensing information

Like FineWeb, this dataset is released under the [Open Data Commons Attribution License (ODC-By) v1.0](https://opendatacommons.org/licenses/by/1-0/) and is subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).

## Dataset origin
Being a subset of FineWeb (v1.3.0), this data covers websites over the 2013-2024 time period.

Since FineWeb is sourced from the internet at large, it is very likely that some personally identifiable information (PII) will be present, even though the FineWeb processing has already anonymized email addresses and public IP addresses. If you find your own PII and would like it removed, please fill out the [FineWeb PII removal/opt-out form](https://forms.gle/VyNT3ZAUPZjPuWp39).

## Considerations for Using the Data

Before using this dataset to train models, we recommend performing additional filtering for sensitive content such as PII or harmful material.
For discussion of the social impact, biases, and known limitations, we also refer to the [FineWeb documentation](https://huggingface.co/datasets/HuggingFaceFW/fineweb).

## Citation information

If you use this dataset in your research or applications, please use the following citation:

```
@article{messmer2025multilingdatacomp,
  title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
  author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
  journal={arXiv},
  year={2025},
  url={https://arxiv.org/abs/2502.10361},
}
```