Peramanathan committed · verified
Commit 1491e7d · Parent: e95784a

End of training

Files changed (1): README.md (+12 -17)

README.md CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: google/flan-t5-small
+base_model: google/flan-t5-base
 tags:
 - generated_from_trainer
 model-index:
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # cv-qa-model
 
-This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 6.8723
+- Loss: 3.3571
 
 ## Model description
 
@@ -35,28 +35,23 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-06
-- train_batch_size: 4
-- eval_batch_size: 4
+- learning_rate: 2e-05
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 10.0922 | 1.0 | 37 | 7.0720 |
-| 9.1794 | 2.0 | 74 | 7.0215 |
-| 5.6882 | 3.0 | 111 | 7.0107 |
-| 8.585 | 4.0 | 148 | 6.9715 |
-| 6.9991 | 5.0 | 185 | 6.9407 |
-| 10.3619 | 6.0 | 222 | 6.9142 |
-| 7.5525 | 7.0 | 259 | 6.8963 |
-| 7.8674 | 8.0 | 296 | 6.8830 |
-| 9.2344 | 9.0 | 333 | 6.8747 |
-| 7.0138 | 10.0 | 370 | 6.8723 |
+| 5.8487 | 1.0 | 73 | 4.9315 |
+| 9.4588 | 2.0 | 146 | 4.2238 |
+| 6.4005 | 3.0 | 219 | 3.7199 |
+| 5.6368 | 4.0 | 292 | 3.4483 |
+| 3.9503 | 5.0 | 365 | 3.3571 |
 
 
 ### Framework versions
```
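
The updated card trains for 5 epochs of 73 optimizer steps each (365 steps total, matching the results table) with a peak learning rate of 2e-05 and `lr_scheduler_type: linear`. Assuming no warmup (the `transformers` default when no warmup arguments are set), the per-step learning rate implied by this configuration can be sketched as:

```python
def linear_lr(step: int, total_steps: int = 365, peak_lr: float = 2e-5) -> float:
    """Linear decay from peak_lr at step 0 down to 0 at total_steps.

    Sketch of the schedule implied by the card's hyperparameters;
    assumes zero warmup steps, which is the transformers default.
    """
    remaining = max(0.0, (total_steps - step) / total_steps)
    return peak_lr * remaining

# The first step uses the full peak learning rate; the last step reaches 0.
print(linear_lr(0))    # 2e-05
print(linear_lr(365))  # 0.0
```

With this schedule the learning rate at the end of epoch 1 (step 73) has already decayed to 80% of its peak, which is worth keeping in mind when comparing per-epoch losses in the table.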