Dataset Card for Scientific Corpus (Cleaned)
This corpus contains ≈11 M English scientific documents cleaned via the DataTrove pipeline. It was used to continue pretraining T5-base (EN‑T5-Sci) before sliding-window materialization. Each document is provided as a row in one of 75 Parquet shards together with extensive per-document QA metadata.
Dataset Details
Uses
Direct Use
- Continued pretraining / domain adaptation of encoder-decoder LMs on scientific text.
- Building scientific QA, summarization, or retrieval benchmarks for English.
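The "sliding-window materialization" mentioned above can be sketched as follows. This is a minimal illustration of turning one tokenized document into overlapping fixed-length training windows; the `window` and `stride` values are illustrative assumptions, not the parameters actually used for EN‑T5-Sci.

```python
def sliding_windows(token_ids, window=512, stride=256):
    """Materialize overlapping fixed-length windows from one tokenized document.

    window/stride sizes here are placeholders, not the pipeline's real settings.
    """
    windows = []
    # Step through the document; the final window is clipped to the document end.
    for start in range(0, max(len(token_ids) - window, 0) + 1, stride):
        windows.append(token_ids[start : start + window])
    return windows


# Toy example: a 10-token "document" with window=4, stride=2.
chunks = sliding_windows(list(range(10)), window=4, stride=2)
print(len(chunks))  # 4 overlapping windows
```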
Dataset Structure
- Split: a single `train` split (≈11 M docs).
- Fields: `text` (string), `id` (string), `metadata` (struct with QA metrics such as length, FastText score, citation counts, publisher/year).
- Files: 75 Parquet shards plus `stats/summary/*` JSONs with descriptive statistics.
Dataset Creation
Curation Rationale
Provide a reproducible, high-quality English scientific corpus for EN‑T5-Sci pretraining and subsequent cross-lingual transfer.
Source Data
- Data Collection: Unpaywall snapshot curated by the DFKI Scilons team (PDF → text via GROBID).
- Processing: DataTrove + custom scripts (citation removal, structural filtering, FastText EN filter ≥0.75, conservative normalization). Outputs include cleaned text and per-document QA metadata.
- Producers: Scientific publishers indexed by Unpaywall; metadata retains publisher/journal/year when available.
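The language-filtering step above (FastText EN score ≥ 0.75) can be sketched as a simple threshold filter. To keep the example self-contained, the FastText language-ID model is replaced here by a stub scorer; the real pipeline's scoring code is not reproduced from the source.

```python
from typing import Callable

EN_THRESHOLD = 0.75  # from the card: documents kept when the English score is >= 0.75

def filter_english(docs, en_score: Callable[[str], float], threshold: float = EN_THRESHOLD):
    """Keep documents whose English-language score meets the threshold."""
    return [d for d in docs if en_score(d["text"]) >= threshold]

def stub_score(text: str) -> float:
    # Placeholder standing in for a FastText language-ID model (an assumption,
    # not the pipeline's actual scorer): crude keyword check for English.
    return 0.9 if "the" in text.lower().split() else 0.4

docs = [
    {"id": "a", "text": "The mitochondria is the powerhouse of the cell."},
    {"id": "b", "text": "Dies ist ein deutscher Satz."},
]
kept = filter_english(docs, stub_score)
print([d["id"] for d in kept])  # ['a']
```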
Bias, Risks, and Limitations
- Domain bias toward STEM fields; humanities and social sciences are underrepresented.
- Potential PII leakage.
- The English-language filter may drop multilingual documents.
- Residual OCR artifacts may remain.
- Curated by: Nikolas Rauscher
- Funded by: DFKI
- Shared by: Nikolas Rauscher
- Language(s) (NLP): English
- License: CC BY-SA 4.0