---
dataset_info:
- config_name: all
features:
- name: id
dtype: string
- name: source_idx
dtype: int64
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 255235338
num_examples: 1047690
- name: validation
num_bytes: 1525747
num_examples: 8405
- name: test
num_bytes: 12081158
num_examples: 62021
download_size: 160700966
dataset_size: 268842243
- config_name: apt
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 556964.1791243993
num_examples: 3723
- name: test
num_bytes: 190920.5678033307
num_examples: 1252
download_size: 240496
dataset_size: 747884.74692773
- config_name: mrpc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 929171
num_examples: 3668
- name: validation
num_bytes: 104247
num_examples: 408
- name: test
num_bytes: 435510
num_examples: 1725
download_size: 995696
dataset_size: 1468928
- config_name: parade
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1761250
num_examples: 7550
- name: validation
num_bytes: 293719
num_examples: 1275
- name: test
num_bytes: 319262
num_examples: 1357
download_size: 769767
dataset_size: 2374231
- config_name: paws
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int8
splits:
- name: train
num_bytes: 150704304
num_examples: 645652
- name: test
num_bytes: 2332165
num_examples: 10000
download_size: 108607809
dataset_size: 153036469
- config_name: pit2015
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1345346
num_examples: 13063
- name: validation
num_bytes: 462242
num_examples: 4727
- name: test
num_bytes: 94569
num_examples: 972
download_size: 596490
dataset_size: 1902157
- config_name: qqp
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 49445436
num_examples: 363846
- name: test
num_bytes: 5492034
num_examples: 40430
download_size: 34836571
dataset_size: 54937470
- config_name: sick
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 481342
num_examples: 4439
- name: validation
num_bytes: 54519
num_examples: 495
- name: test
num_bytes: 531654
num_examples: 4906
download_size: 347239
dataset_size: 1067515
- config_name: stsb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 754791
num_examples: 5749
- name: validation
num_bytes: 216064
num_examples: 1500
- name: test
num_bytes: 169974
num_examples: 1379
download_size: 707460
dataset_size: 1140829
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: validation
path: all/validation-*
- split: test
path: all/test-*
- config_name: apt
data_files:
- split: train
path: apt/train-*
- split: test
path: apt/test-*
- config_name: mrpc
data_files:
- split: train
path: mrpc/train-*
- split: validation
path: mrpc/validation-*
- split: test
path: mrpc/test-*
- config_name: parade
data_files:
- split: train
path: parade/train-*
- split: validation
path: parade/validation-*
- split: test
path: parade/test-*
- config_name: paws
data_files:
- split: train
path: paws/train-*
- split: test
path: paws/test-*
- config_name: pit2015
data_files:
- split: train
path: pit2015/train-*
- split: validation
path: pit2015/validation-*
- split: test
path: pit2015/test-*
- config_name: qqp
data_files:
- split: train
path: qqp/train-*
- split: test
path: qqp/test-*
- config_name: sick
data_files:
- split: train
path: sick/train-*
- split: validation
path: sick/validation-*
- split: test
path: sick/test-*
- config_name: stsb
data_files:
- split: train
path: stsb/train-*
- split: validation
path: stsb/validation-*
- split: test
path: stsb/test-*
task_categories:
- text-classification
- sentence-similarity
- text-ranking
- text-retrieval
tags:
- english
- sentence-similarity
- sentence-pair-classification
- semantic-retrieval
- re-ranking
- information-retrieval
- embedding-training
- semantic-search
- paraphrase-detection
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
pretty_name: Redis LangCache SentencePairs v1
---
# Redis LangCache Sentence Pairs Dataset
A large, consolidated collection of English sentence pairs for training and evaluating semantic similarity, retrieval, and re-ranking models. It merges widely used benchmarks into a single schema with consistent fields and ready-made splits.
## Dataset Details

### Dataset Description
- Name: langcache-sentencepairs-v1
- Summary: Sentence-pair dataset created to fine-tune encoder-based embedding and re-ranking models. It combines multiple high-quality corpora spanning diverse styles (short questions, long paraphrases, Twitter, adversarial pairs, technical queries, news headlines, etc.), with both positive and negative examples and preserved splits.
- Curated by: Redis
- Shared by: Aditeya Baral
- Language(s): English
- License: Apache-2.0
- Homepage / Repository: https://huggingface.co/datasets/redis/langcache-sentencepairs-v1
### Configs and coverage

- `all`: unified view over all sources, with extra metadata columns (`id`, `source`, `source_idx`).
- Source-specific configs: `apt`, `mrpc`, `parade`, `paws`, `pit2015`, `qqp`, `sick`, `stsb`.
### Size & splits (overall)

Total ~1.12M pairs: ~1.05M train, 8.4k validation, 62k test. See per-config sizes in the dataset viewer.
### Dataset Sources
- APT (Adversarial Paraphrasing Task) — Paper | Dataset
- MRPC (Microsoft Research Paraphrase Corpus) — Paper | Dataset
- PARADE (Paraphrase Identification requiring Domain Knowledge) — Paper | Dataset
- PAWS (Paraphrase Adversaries from Word Scrambling) — Paper | Dataset
- PIT2015 (SemEval 2015 Twitter Paraphrase) — Website | Dataset
- QQP (Quora Question Pairs) — Website | Dataset
- SICK (Sentences Involving Compositional Knowledge) — Website | Dataset
- STS-B (Semantic Textual Similarity Benchmark) — Website | Dataset
## Uses
- Train/fine-tune sentence encoders for semantic retrieval and re-ranking.
- Supervised sentence-pair classification tasks like paraphrase detection.
- Evaluation of semantic similarity and building general-purpose retrieval and ranking systems.
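As a toy illustration of the paraphrase-detection use case, a sentence pair can be scored by embedding both sides and thresholding cosine similarity. The vectors and the threshold below are made up for the sketch; any real encoder output would replace them:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_label(emb1, emb2, threshold=0.8):
    """Map a similarity score to this dataset's binary label scheme:
    1 = paraphrase/similar, 0 = non-paraphrase/dissimilar."""
    return 1 if cosine(emb1, emb2) >= threshold else 0

# Toy embeddings standing in for encoder outputs.
similar = predict_label([0.9, 0.1, 0.4], [0.85, 0.15, 0.42])   # 1
dissimilar = predict_label([0.9, 0.1, 0.4], [-0.2, 0.9, 0.1])  # 0
```

The threshold is a free parameter; in practice it would be tuned on the validation split of whichever config you train against.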
### Direct Use
```python
from datasets import load_dataset

# Unified corpus
ds = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "all")

# A single source, e.g., PAWS
paws = load_dataset("aditeyabaral-redis/langcache-sentencepairs-v1", "paws")

# Columns: sentence1, sentence2, label (+ id, source, source_idx in 'all')
```
### Out-of-Scope Use
- Non-English or multilingual modeling: The dataset is entirely in English and will not perform well for training or evaluating multilingual models.
- Uncalibrated similarity regression: The STS-B portion has been integerized in this release, so it should not be used for fine-grained regression tasks requiring the original continuous similarity scores.
## Dataset Structure

### Fields

- `sentence1` (string) — First sentence.
- `sentence2` (string) — Second sentence.
- `label` (int64) — Task label: `1` ≈ paraphrase/similar, `0` ≈ non-paraphrase/dissimilar. For sources with continuous similarity (e.g., STS-B), labels are integerized in this release; consult the source dataset if you need the original continuous scores.
- `id` (string, config `all` only) — Dataset-wide identifier following the pattern `langcache_{split}_{row number}`.
- `source` (string, config `all` only) — Name of the source dataset (e.g., `mrpc`, `paws`).
- `source_idx` (int64, config `all` only) — Row index within the source dataset.
### Splits

`train`, `validation` (where available), and `test` — original dataset splits are preserved whenever provided by the source.
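Given the identifier pattern described above, the components can be recovered with plain string handling. `parse_pair_id` is a hypothetical helper written for this card, not part of the dataset:

```python
def parse_pair_id(value: str) -> dict:
    """Split an identifier like 'langcache_train_123' into its
    prefix, split name, and row number."""
    prefix, split, row = value.rsplit("_", 2)
    return {"prefix": prefix, "split": split, "row": int(row)}

parsed = parse_pair_id("langcache_train_123")
# {'prefix': 'langcache', 'split': 'train', 'row': 123}
```

`rsplit` from the right keeps the helper robust even if a prefix itself contains underscores.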
### Schemas by config

- `all`: 6 columns (`id`, `source_idx`, `source`, `sentence1`, `sentence2`, `label`).
- All other configs: 3 columns (`sentence1`, `sentence2`, `label`).
## Dataset Creation

### Curation Rationale
To fine-tune stronger encoder models for retrieval and re-ranking, we curated a large, diverse pool of labeled sentence pairs (positives & negatives) covering multiple real-world styles and domains. Consolidating canonical benchmarks into a single schema reduces engineering overhead and encourages generalization beyond any single dataset.
### Source Data

#### Data Collection and Processing
- Ingested each selected dataset and preserved original splits when available.
- Normalized to a common schema; no manual relabeling was performed.
- Merged into `all` with added `source` and `source_idx` columns for traceability.
#### Who are the source data producers?
Original creators of the upstream datasets (e.g., Microsoft Research for MRPC, Quora for QQP, Google Research for PAWS, etc.).
### Personal and Sensitive Information
The corpus may include public-text sentences that mention people, organizations, or places (e.g., news, Wikipedia, tweets). It is not intended for identifying or inferring sensitive attributes of individuals. If you require strict PII controls, filter or exclude sources accordingly before downstream use.
## Bias, Risks, and Limitations
- Label noise: Some sources include noisily labeled pairs (e.g., PAWS large weakly-labeled set).
- Granularity mismatch: STS-B's continuous similarity is represented as integers here; treat with care if you need fine-grained scoring.
- English-only: Not suitable for multilingual evaluation without adaptation.
### Recommendations

- Use the `all` configuration for large-scale training, but be aware that some datasets dominate in size (e.g., PAWS, QQP). Apply sampling or weighting if you want balanced learning across domains.
- Treat STS-B labels with caution: they are integerized in this release. For regression-style similarity scoring, use the original STS-B dataset.
- This dataset is best suited for training retrieval and re-ranking models. Avoid re-purposing it for unrelated tasks (e.g., user profiling, sensitive attribute prediction, or multilingual training).
- Track the `source` field (in the `all` config) during training to analyze how performance varies by dataset type, which can guide fine-tuning or domain adaptation.
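The sampling advice above can be sketched with the standard library alone. The rows here are toy stand-ins for the `all` config, and `balanced_batch` is a hypothetical helper that draws sources uniformly so small corpora are not drowned out by large ones:

```python
import random

random.seed(0)

# Toy stand-in for the 'all' config: (source, text) rows with a
# deliberately imbalanced source distribution.
rows = [("paws", f"paws pair {i}") for i in range(1000)]
rows += [("sick", f"sick pair {i}") for i in range(50)]

# Group rows by source.
by_source = {}
for source, pair in rows:
    by_source.setdefault(source, []).append(pair)

def balanced_batch(by_source, batch_size):
    """For every slot, pick a source uniformly at random, then a row
    uniformly within that source."""
    sources = list(by_source)
    return [random.choice(by_source[random.choice(sources)])
            for _ in range(batch_size)]

batch = balanced_batch(by_source, 2000)
```

With this scheme each source contributes roughly equally per batch regardless of its raw size; interpolating between uniform-over-sources and uniform-over-rows gives intermediate weightings.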
## Citation
If you use this dataset, please cite the Hugging Face entry and the original upstream datasets you rely on.
BibTeX:
```bibtex
@misc{langcache_sentencepairs_v1_2025,
  title        = {langcache-sentencepairs-v1},
  author       = {Baral, Aditeya and Redis},
  howpublished = {\url{https://huggingface.co/datasets/aditeyabaral-redis/langcache-sentencepairs-v1}},
  year         = {2025},
  note         = {Version 1}
}
```
## Dataset Card Authors
Aditeya Baral