jlpan committed
Commit 0d54b2e · 1 Parent(s): b06a9b9

update model card README.md

Files changed (1):
1. README.md +69 -4
README.md CHANGED
@@ -1,10 +1,75 @@
  ---
- library_name: peft
+ license: bigcode-openrail-m
+ base_model: bigcode/starcoder
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: starcoder-finetuned-test_newSnippet
+   results: []
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # starcoder-finetuned-test_newSnippet
+
+ This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1728
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
  ## Training procedure

- ### Framework versions
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 8e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 256
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 80
+ - training_steps: 800

- - PEFT 0.5.0.dev0
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 4.5787        | 0.06  | 50   | 0.3535          |
+ | 0.236         | 0.12  | 100  | 0.1948          |
+ | 0.1862        | 1.01  | 150  | 0.1847          |
+ | 0.1866        | 1.07  | 200  | 0.1808          |
+ | 0.1838        | 1.13  | 250  | 0.1794          |
+ | 0.1718        | 2.02  | 300  | 0.1772          |
+ | 0.1796        | 2.08  | 350  | 0.1761          |
+ | 0.178         | 2.14  | 400  | 0.1762          |
+ | 0.1666        | 3.03  | 450  | 0.1743          |
+ | 0.1772        | 3.09  | 500  | 0.1739          |
+ | 0.1739        | 3.15  | 550  | 0.1746          |
+ | 0.1652        | 4.04  | 600  | 0.1731          |
+ | 0.1755        | 4.1   | 650  | 0.1731          |
+ | 0.1706        | 4.16  | 700  | 0.1735          |
+ | 0.1668        | 5.04  | 750  | 0.1728          |
+ | 0.1747        | 5.11  | 800  | 0.1728          |
+
+
+ ### Framework versions

- - PEFT 0.5.0.dev0
+ - Transformers 4.32.0.dev0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.12.0
+ - Tokenizers 0.13.3
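
The updated card does not yet include a usage snippet. The sketch below shows one plausible way to run the fine-tuned checkpoint for code completion. It assumes the repository id is `jlpan/starcoder-finetuned-test_newSnippet` (inferred from the committer and model name, not stated in the card) and that the repo holds a full merged checkpoint; if it only stores a PEFT adapter, as the previous card's `PEFT 0.5.0.dev0` entry suggests, the base model would instead need to be wrapped with `peft.PeftModel.from_pretrained`.

```python
# Minimal inference sketch. Assumptions: the repo id below is hypothetical,
# and the repository contains a full merged causal-LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "jlpan/starcoder-finetuned-test_newSnippet"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # StarCoder is ~15B params; fp16 halves memory
    device_map="auto",          # requires the `accelerate` package
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```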
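
For anyone trying to reproduce the run, the hyperparameters listed under "Training hyperparameters" map directly onto `transformers.TrainingArguments`; the total train batch size of 256 is simply the per-device batch size of 32 times the 8 gradient-accumulation steps on a single device. The sketch below is a hypothetical reconstruction, not the author's training script: the output directory, evaluation cadence, and `fp16` flag are assumptions.

```python
# Hypothetical TrainingArguments mirroring the hyperparameters in the card
# (Transformers 4.32.x API). Dataset, model, and collator are not shown.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="starcoder-finetuned-test_newSnippet",  # assumed
    learning_rate=8e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,  # 32 * 8 = 256 effective batch size
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=80,
    max_steps=800,                  # training_steps: 800
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",    # the card logs validation loss every 50 steps
    eval_steps=50,
    logging_steps=50,
    fp16=True,                      # assumption; precision is not stated in the card
)
```

These arguments would then be passed to a `transformers.Trainer` together with the (unspecified) training and evaluation datasets and a causal-LM data collator.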