Update README.md
README.md
@@ -87,13 +87,13 @@ The following tables summarize the scores obtained by the **GPT2-CosmosQA**.
 
 | Model | Dev Acc | Test Acc |
 |:---------------:|:-----:|:-----:|
-| BERT-FT Multiway
+| BERT-FT Multiway* | 68.3 | 68.4 |
 | GPT-FT * | 54.0 | 54.4 |
 | GPT2-CosmosQA | 60.3 | 59.7 |
 
 ## Inference
 
-This project was mainly dedicated to test the common sense understanding of the GPT2-model.We used a Dataset known as CosmosQ requires reasoning beyond the exact text spans in the context.The above results shows that
+This project was mainly dedicated to testing the common-sense understanding of the GPT-2 model. We used the CosmosQA dataset, which requires reasoning beyond the exact text spans in the context. The results above show that the GPT-2 model does better than some of the baseline models, given that it uses a causal language modeling (CLM) objective.
 
 
 ## Credits
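The diff only states that GPT-2 was evaluated on CosmosQA with a causal language modeling (CLM) objective; the repository's actual inference code is not shown here. As a rough, hedged sketch of how a causal LM can answer CosmosQA-style multiple-choice questions, the snippet below scores each candidate answer by its log-likelihood under the model and picks the highest-scoring one. The `gpt2` checkpoint (rather than the fine-tuned GPT2-CosmosQA weights), the helper `option_log_likelihood`, and the example question are illustrative assumptions, not the project's code.

```python
# Illustrative sketch (not the repository's inference code): answer a
# CosmosQA-style multiple-choice question by comparing the log-likelihood
# each candidate answer receives under a causal LM such as GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def option_log_likelihood(context, question, option):
    """Sum of token log-probabilities of `option` given context + question."""
    prompt_ids = tokenizer.encode(f"{context} {question}")
    option_ids = tokenizer.encode(" " + option)
    input_ids = torch.tensor([prompt_ids + option_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probabilities for each position's next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    # Only accumulate the positions that predict the option tokens.
    option_positions = range(len(prompt_ids) - 1, input_ids.size(1) - 1)
    return sum(log_probs[i, targets[i]].item() for i in option_positions)

context = "Ann left her umbrella at home and it started to rain."
question = "Why might Ann be unhappy?"
options = [
    "She got wet on the way.",
    "She loves the rain.",
    "She found money.",
    "She stayed home all day.",
]
best = max(range(len(options)), key=lambda i: option_log_likelihood(context, question, options[i]))
print("Predicted answer:", options[best])
```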