For a better dataset description, please visit: [LINK](https://klue-benchmark.com/) <br>
<br>
**This dataset was prepared by converting the KLUE NLI dataset** for use in contrastive training (SimCSE). The code used to prepare the data is given below:

```py
import pandas as pd
from datasets import load_dataset, concatenate_datasets, Dataset
from torch.utils.data import random_split


class PrepTriplets:
    @staticmethod
    def make_dataset():
        train_dataset = load_dataset("klue", "nli", split="train")
        val_dataset = load_dataset("klue", "nli", split="validation")
        merged_dataset = concatenate_datasets([train_dataset, val_dataset])

        triplets_dataset = PrepTriplets._get_triplets(merged_dataset)

        # Split back into train (90%) and validation (10%)
        train_size = int(0.9 * len(triplets_dataset))
        val_size = len(triplets_dataset) - train_size
        train_subset, val_subset = random_split(
            triplets_dataset, [train_size, val_size]
        )

        # Convert Subset objects back to Dataset
        train_dataset = triplets_dataset.select(train_subset.indices)
        val_dataset = triplets_dataset.select(val_subset.indices)

        return train_dataset, val_dataset

    @staticmethod
    def _get_triplets(dataset):
        df = pd.DataFrame(dataset)

        # KLUE NLI labels: 0 = entailment, 1 = neutral, 2 = contradiction
        entailments = df[df["label"] == 0]
        contradictions = df[df["label"] == 2]

        triplets = []

        for premise in df["premise"].unique():
            entailment_hypothesis = entailments[entailments["premise"] == premise][
                "hypothesis"
            ].tolist()
            contradiction_hypothesis = contradictions[
                contradictions["premise"] == premise
            ]["hypothesis"].tolist()

            # Keep only premises that have both an entailment and a contradiction
            if entailment_hypothesis and contradiction_hypothesis:
                triplets.append(
                    {
                        "premise": premise,
                        "entailment": entailment_hypothesis[0],
                        "contradiction": contradiction_hypothesis[0],
                    }
                )

        triplets_dataset = Dataset.from_pandas(pd.DataFrame(triplets))

        return triplets_dataset


# Example usage:
# PrepTriplets.make_dataset()
```
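As a quick sanity check, the filtering step in `_get_triplets` (keeping only premises that have both an entailment and a contradiction) can be exercised on a few toy rows. The rows below are illustrative placeholders, not actual KLUE data:

```py
import pandas as pd

# Toy NLI rows following the KLUE label convention used above:
# 0 = entailment, 1 = neutral, 2 = contradiction.
rows = [
    {"premise": "A", "hypothesis": "A is true", "label": 0},
    {"premise": "A", "hypothesis": "A is false", "label": 2},
    {"premise": "B", "hypothesis": "B is true", "label": 0},  # no contradiction
]
df = pd.DataFrame(rows)

entailments = df[df["label"] == 0]
contradictions = df[df["label"] == 2]

triplets = []
for premise in df["premise"].unique():
    pos = entailments[entailments["premise"] == premise]["hypothesis"].tolist()
    neg = contradictions[contradictions["premise"] == premise]["hypothesis"].tolist()
    # Premise "B" is dropped: it has an entailment but no contradiction.
    if pos and neg:
        triplets.append(
            {"premise": premise, "entailment": pos[0], "contradiction": neg[0]}
        )

print(triplets)
```

Only premise `"A"` survives, since `"B"` lacks a contradiction hypothesis; this is why the triplet dataset contains fewer rows than the original KLUE NLI splits.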
 
  **How to download**