thorirhrafn committed
Commit 592dc30 · verified · 1 Parent(s): 9297959

End of training

Files changed (1):
  1. README.md (+72, −0)

README.md ADDED
@@ -0,0 +1,72 @@
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: AI-Sweden-Models/gpt-sw3-1.3b
model-index:
- name: gpt1B_DPO_model_ver2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt1B_DPO_model_ver2

This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-1.3b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0069
- Rewards/chosen: 0.0482
- Rewards/rejected: -6.7989
- Rewards/accuracies: 1.0
- Rewards/margins: 6.8472
- Logps/rejected: -275.1611
- Logps/chosen: -112.8062
- Logits/rejected: -2.7230
- Logits/chosen: -2.9162

## Model description

More information needed

## Intended uses & limitations

More information needed

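Pending more detail from the author, the sketch below shows one plausible way to run the model for inference: load the base gpt-sw3-1.3b checkpoint and attach this repository's PEFT adapter on top. The adapter id `thorirhrafn/gpt1B_DPO_model_ver2` is an assumption inferred from the committer and model name, and the Swedish prompt is only a placeholder.

```python
# Minimal inference sketch (assumptions: adapter repo id inferred from the
# committer name; prompt is a placeholder). Not the author's confirmed usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "AI-Sweden-Models/gpt-sw3-1.3b"
adapter_id = "thorirhrafn/gpt1B_DPO_model_ver2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the DPO-trained adapter
model.eval()

inputs = tokenizer("Berätta om Sverige:", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred over loading PEFT at inference time, `model.merge_and_unload()` folds the adapter weights into the base model.
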
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they might map onto code follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

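The card does not include the training script. Below is a hedged reconstruction of how these hyperparameters could map onto `trl`'s `DPOTrainer` as of the framework versions listed further down (`trl` itself is not pinned in the card). The preference dataset and LoRA settings are placeholders, not the author's actual configuration.

```python
# Hedged training sketch: DPO fine-tuning with a LoRA adapter via trl
# (early-2024 trl API). Dataset contents and LoRA values are placeholders.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer

model_id = "AI-Sweden-Models/gpt-sw3-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder preference data: DPO expects prompt/chosen/rejected columns.
train_dataset = Dataset.from_dict({
    "prompt": ["En fråga?"],
    "chosen": ["A preferred answer."],
    "rejected": ["A dispreferred answer."],
})

# Mirrors the card's hyperparameters; the Adam betas/epsilon listed above
# match the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="gpt1B_DPO_model_ver2",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # effective train batch size of 8
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    remove_unused_columns=False,  # DPOTrainer consumes the raw preference columns
)

peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)  # assumed values

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, trl derives the reference from the frozen base
    beta=0.1,        # trl default; not stated in the card
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```
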
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0089        | 0.79  | 200  | 0.0086          | 0.0829         | -6.3343          | 1.0                | 6.4172          | -270.5143      | -112.4591    | -2.7397         | -2.9321       |
| 0.0031        | 1.59  | 400  | 0.0069          | 0.0482         | -6.7989          | 1.0                | 6.8472          | -275.1611      | -112.8062    | -2.7230         | -2.9162       |

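For reading the reward columns above: assuming the standard DPO objective (Rafailov et al., 2023) as implemented in `trl`, training minimizes

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

where $\pi_\theta$ is the policy being tuned, $\pi_{\mathrm{ref}}$ is the frozen reference model, $y_w$/$y_l$ are the chosen/rejected completions, and $\beta$ is the scaling temperature (`trl` defaults to 0.1). Rewards/chosen and Rewards/rejected are the mean $\beta$-scaled log-probability ratios for the chosen and rejected completions, and Rewards/margins is their difference; the margin of about 6.85 at step 400 is consistent with the 0.0482 and -6.7989 entries in the same row.
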
### Framework versions

- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
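
To approximate this environment, the versions above can be pinned in a requirements file. This is a sketch assuming pip and the PyTorch CUDA 11.8 wheel index; exact wheel availability depends on platform and Python version, and `trl` (used for DPO) is not pinned in the card, so it is omitted here.

```text
--extra-index-url https://download.pytorch.org/whl/cu118
torch==2.2.0+cu118
peft==0.8.2
transformers==4.38.1
datasets==2.17.1
tokenizers==0.15.2
```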