Vivek committed on
Commit 0cd6747 · 1 Parent(s): a2f7c2e

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -87,13 +87,13 @@ The following tables summarize the scores obtained by the **GPT2-CosmosQA**. The
 
  | Model | Dev Acc | Test Acc |
  |:---------------:|:-----:|:-----:|
- | BERT-FT Multiway| 68.3.| 68.4 |
+ | BERT-FT Multiway* | 68.3 | 68.4 |
  | GPT-FT * | 54.0 | 54.4 |
  | GPT2-CosmosQA | 60.3 | 59.7 |
 
  ## Inference
 
- This project was mainly dedicated to test the common sense understanding of the GPT2-model.We used a Dataset known as CosmosQ requires reasoning beyond the exact text spans in the context.The above results shows that even so GPT2 model is a model that was pre-trained to predict the next word it can remember long term dependencies
+ This project was mainly dedicated to testing the common-sense understanding of the GPT-2 model. We used the CosmosQA dataset, which requires reasoning beyond the exact text spans in the context. The results above show that the GPT-2 model performs better than some of the baseline models, which is notable given that it was trained with a causal language modeling (CLM) objective.
 
 
  ## Credits
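
The updated "Inference" note describes evaluating GPT-2's common-sense reasoning on CosmosQA using only a causal language modeling objective. As a rough illustration of that kind of setup (not the code actually used for GPT2-CosmosQA, whose fine-tuning head and checkpoint are not shown in this diff), the sketch below ranks multiple-choice answers by the average log-likelihood a stock `gpt2` checkpoint assigns to each choice after the context and question; the checkpoint name, the scoring scheme, and the example question are all assumptions.

```python
# Minimal sketch (assumptions noted above): rank CosmosQA-style answer choices
# by the average log-likelihood a causal GPT-2 model assigns to each choice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")        # placeholder checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def choice_score(context: str, question: str, choice: str) -> float:
    """Average log-prob of the choice tokens given context + question."""
    prompt_ids = tokenizer(f"{context} {question}", return_tensors="pt").input_ids
    choice_ids = tokenizer(" " + choice, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, choice_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits                     # [1, seq_len, vocab]
    # Position i predicts token i+1, so shift logits and targets by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    start = prompt_ids.shape[-1] - 1                         # first position predicting a choice token
    picked = log_probs[start:].gather(1, targets[start:, None])
    return picked.mean().item()

# Invented example in the CosmosQA style (context, question, answer choices).
context = "Hank put his phone on silent and closed his laptop before the meeting."
question = "Why did Hank silence his phone?"
choices = [
    "He wanted to avoid interruptions during the meeting.",
    "He was planning to sell the phone afterwards.",
]
print(max(choices, key=lambda c: choice_score(context, question, c)))
```

Averaging the per-token log-probabilities (rather than summing them) keeps longer answer choices from being penalized simply for containing more tokens.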