---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: FineWeb-HQ
size_categories:
- n>1T
---

# FineWeb-HQ

## Dataset Summary
FineWeb-HQ is a high-quality, model-filtered pretraining dataset: a subset of FineWeb obtained by selecting the top 10% of FineWeb documents according to a deep-learning classifier trained to identify structured and knowledge-rich samples. The classifier scores documents based on their XLM-RoBERTa embeddings.
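As a rough illustration of how such model-based scoring can work (a minimal sketch, not the released classifier: the checkpoint, pooling strategy, and linear scoring head below are all assumptions), one might embed each 512-token chunk with XLM-RoBERTa and score a pooled document embedding:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative only: the actual classifier's checkpoint, pooling strategy,
# and scoring head are not specified here and may differ from this sketch.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
head = torch.nn.Linear(768, 1)  # hypothetical trained quality head

def quality_score(text: str) -> float:
    # Split the document into 512-token chunks, mirroring the dataset's
    # `embeddings` field, and embed each chunk with XLM-RoBERTa.
    enc = tokenizer(
        text,
        truncation=True,
        max_length=512,
        return_overflowing_tokens=True,
        padding=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        out = encoder(
            input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
        )
        # Mean-pool token embeddings per chunk (assumption), then average
        # the chunk embeddings into one 768-dim document embedding.
        mask = enc["attention_mask"].unsqueeze(-1)
        chunk_emb = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
        doc_emb = chunk_emb.mean(dim=0)
        return head(doc_emb).item()
```

Under this scheme, documents whose scores fall in the top 10% would be retained.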
To validate our approach, we pretrained 1B-parameter models with a Llama-like architecture across multiple languages and scripts. On standard English benchmarks, our dataset outperforms its English counterparts DCLM and FineWeb-Edu on average. For the multilingual version, FineWeb2-HQ, evaluations on CMMLU (Chinese), MMLU (German), and MMLU (French) show that it matches FineWeb2's performance while being trained on 6x fewer tokens, and surpasses it when fully trained.
| Benchmark | FineWeb-HQ (ours) | DCLM | FineWeb-Edu | FineWeb |
|---|---|---|---|---|
| Average Rank | 1.8333 | 2.3889 | 2.4444 | 3.3333 |
| ARC (Challenge) | 0.3550 | 0.3530 | 0.3850 | 0.3010 |
| ARC (Easy) | 0.6670 | 0.6470 | 0.6970 | 0.5880 |
| CommonsenseQA | 0.3870 | 0.4100 | 0.3770 | 0.3850 |
| HellaSwag | 0.6040 | 0.5960 | 0.5700 | 0.5930 |
| MMLU | 0.3400 | 0.3160 | 0.3470 | 0.3030 |
| OpenBookQA | 0.3860 | 0.3840 | 0.4180 | 0.3560 |
| PIQA | 0.7510 | 0.7510 | 0.7410 | 0.7620 |
| WinoGrande | 0.5720 | 0.5610 | 0.5660 | 0.5550 |
| TriviaQA | 0.0820 | 0.1240 | 0.0320 | 0.0370 |
For more details, see our paper [Enhancing Multilingual LLM Pretraining with Model-Based Data Selection](https://arxiv.org/abs/2502.10361).
## Key features
- High-quality selection: Top 10% of FineWeb documents by quality
- Multilingual version: FineWeb2-HQ
- Model-based filtering: Uses an XLM-RoBERTa embedding-based classifier to score documents
- Enhanced benchmark performance: surpasses FineWeb and is competitive with DCLM and FineWeb-Edu
- Fully open: Emphasis on transparency
## Dataset structure

### Data fields
Each data entry includes the original FineWeb data fields with the addition of:
- `quality_score`: quality score assigned by the quality classifier
- `embeddings`: array of float arrays containing a 768-dimensional XLM-RoBERTa embedding for each 512-token chunk of the tokenized text
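For example, these fields can be inspected by streaming the dataset with the `datasets` library (a sketch; the repo id below is an assumption about where FineWeb-HQ is hosted on the Hugging Face Hub):

```python
from datasets import load_dataset

# Stream to avoid downloading the full dataset; the repo id is an
# assumption and may need to be adjusted to the actual Hub location.
ds = load_dataset("epfml/FineWeb-HQ", split="train", streaming=True)

sample = next(iter(ds))
print(sample["quality_score"])       # score assigned by the quality classifier
print(len(sample["embeddings"]))     # number of 512-token chunks
print(len(sample["embeddings"][0]))  # 768 dimensions per chunk embedding
```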
## Licensing information
Like FineWeb, this dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 and is subject to CommonCrawl's Terms of Use.
## Dataset origin
Being a subset of FineWeb (v1.3.0), this data covers websites crawled between 2013 and 2024.
Since FineWeb is sourced from the internet at large, it is very likely that some personally identifiable information (PII) is present, even though FineWeb's processing already anonymizes email addresses and public IP addresses. If you find your own PII and would like it removed, please fill out the FineWeb PII removal/opt-out form.
## Considerations for Using the Data
Before using this dataset to train models, we recommend performing additional filtering for sensitive content such as PII or harmful material. For discussion of social impact, biases, and known limitations, we refer to the FineWeb documentation.
## Citation information
If you use this dataset in your research or applications, please use the following citation:
```bibtex
@article{messmer2025multilingdatacomp,
  title={Enhancing Multilingual LLM Pretraining with Model-Based Data Selection},
  author={Bettina Messmer and Vinko Sabolčec and Martin Jaggi},
  journal={arXiv},
  year={2025},
  url={https://arxiv.org/abs/2502.10361},
}
```