HectorHe committed on
Commit 5b8e024 · verified · 1 Parent(s): 6965aa2

Model save

README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct
+ library_name: transformers
+ model_name: Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K
+ tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K
+
+ This model is a fine-tuned version of [deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="HectorHe/Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/nkgydwm9)
+
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 0.18.0.dev0
+ - Transformers: 4.52.0.dev0
+ - Pytorch: 2.6.0
+ - Datasets: 4.0.0
+ - Tokenizers: 0.21.4
+
+ ## Citations
+
+
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
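The card above states that the model was trained with SFT via TRL but does not include the training script. As a rough orientation, here is a minimal sketch of what such a run could look like with TRL's `SFTTrainer`; the dataset (a public chat-formatted stand-in, not the s1K data this checkpoint was actually trained on), hyperparameters, and output directory are illustrative assumptions rather than the settings behind this commit.

```python
# Hypothetical SFT sketch with TRL's SFTTrainer. Dataset, hyperparameters and
# output directory are illustrative assumptions, not the settings used here.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Assumed stand-in: a chat-formatted dataset with a "messages" column that
# SFTTrainer can render with the model's chat template.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

training_args = SFTConfig(
    output_dir="Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K",  # assumed name
    num_train_epochs=1,                 # illustrative
    per_device_train_batch_size=1,      # illustrative
    gradient_accumulation_steps=8,      # illustrative
    learning_rate=1e-5,                 # illustrative
    bf16=True,
    logging_steps=10,
    report_to="wandb",                  # matches the W&B badge in the card
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
trainer.save_model()
```

With `report_to="wandb"`, a run like this would produce a Weights & Biases page of the kind linked in the badge above.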
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 40609159823360.0,
+ "train_loss": 0.6996430353867099,
+ "train_runtime": 32431.1921,
+ "train_samples": 1000,
+ "train_samples_per_second": 0.274,
+ "train_steps_per_second": 0.034
+ }
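These are the standard `Trainer` speed metrics; the epoch count is not stored directly, but it can be backed out from the throughput figures. A small sketch, assuming the usual Hugging Face convention that `train_samples_per_second` counts samples processed over all epochs divided by wall-clock runtime:

```python
# Back-of-the-envelope check on the metrics in all_results.json.
# Assumes the usual Trainer convention:
#   train_samples_per_second = (train_samples * num_epochs) / train_runtime
metrics = {
    "train_runtime": 32431.1921,        # seconds
    "train_samples": 1000,
    "train_samples_per_second": 0.274,
    "train_steps_per_second": 0.034,
}

total_samples_seen = metrics["train_samples_per_second"] * metrics["train_runtime"]
implied_epochs = total_samples_seen / metrics["train_samples"]
implied_batch_size = (
    metrics["train_samples_per_second"] / metrics["train_steps_per_second"]
)

print(f"samples seen: {total_samples_seen:.0f}")                   # ~8886
print(f"implied epochs: {implied_epochs:.1f}")                     # ~8.9
print(f"implied effective batch size: {implied_batch_size:.1f}")   # ~8.1
```

This suggests roughly nine passes over the 1,000 samples at an effective batch size of about eight; treat these as derived estimates, not logged values.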
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 100000,
+ "do_sample": true,
+ "eos_token_id": 100001,
+ "temperature": 0.3,
+ "top_p": 0.95,
+ "transformers_version": "4.52.0.dev0"
+ }
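This file sets the repository-wide sampling defaults (sampling on, temperature 0.3, top_p 0.95, and DeepSeek's BOS/EOS token ids), which `generate` picks up automatically when the checkpoint is loaded. Below is a hedged sketch of loading the model and overriding those stored defaults for a single call; the prompt, dtype, and override values are illustrative, and `trust_remote_code=True` is assumed to be needed for the DeepSeek-V2 architecture.

```python
# Hedged sketch: load the checkpoint and override the generation defaults
# stored in generation_config.json for one call. Prompt and override values
# are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HectorHe/Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# The defaults from generation_config.json (do_sample=True, temperature=0.3,
# top_p=0.95) apply whenever no overrides are passed to generate().
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=128,
    temperature=0.7,   # per-call override of the stored default (0.3)
    top_p=0.9,         # per-call override of the stored default (0.95)
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```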
model-00001-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd2883c6a19aa61189e8d55dd0d9aa2ac99c9a6f8bb64b9f929f0eabf2ab71bf
+ oid sha256:a1f8a6c11aebce8545d88d2cb434e754b748a173d3cb2507948ed9abd7e88548
  size 4994763632
model-00002-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d2e23e8c42da16a7e793627e7f894a2e51dc0ba423166a2e8d10a8feeaefcbbe
+ oid sha256:cdd561d04acc12f71b3f5e55e0ec4075c4ac1f0c47db557f8ce7faf01cfb8e24
  size 4995044944
model-00003-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ca84f61062aed3fd278207daf9c143268fd023e74e21292761009f6a75bc2dec
+ oid sha256:cfe65fce442719d602aad88f48f5ea3aef0e44ee3ef867d246eab3bd5b48dc18
  size 4996085000
model-00004-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5cba08f4192aef837ab5d82a26c9a5dc429d24ef3a3ebd9b0a0efd0216b5ed98
+ oid sha256:4e9e4fb25d3c5e31078d6a313e40e8e9a5d580c6531d6dcd3867ae72b390abd9
  size 4996085224
model-00005-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5c70cbb7c6e02356d66ce995080308ef5dc0e7d7de1d7a2bac6d8842be550b20
+ oid sha256:cc7cc1cc68b50cdac926f499782d9a8d94552c65ff9d1a2977d162d5fed2a89e
  size 4996085224
model-00006-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:00df6f1f553a6dc4cf8e49c715ecd1a71e0306a6c38d8bee18dc9e38c7791ec3
+ oid sha256:e5a99332152ed0c9446b9218b400e620c63f4aafb15f67fe7b4bf0773da153fa
  size 4995045792
model-00007-of-00007.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c0401807c612aee685c1ff25f1946bab63b528a9370c8b1007f078e7df17d89b
+ oid sha256:79c43ee808aa7ece4669c4c3ca6ddda5e606e14c09229a23583d866df11894a2
  size 1440515736
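The entries above are Git LFS pointer files rather than the weights themselves: each records the spec version, the SHA-256 of the real shard (`oid`), and its size in bytes. A minimal sketch of verifying a downloaded shard against its pointer follows, using the new hash and size of the first shard from this commit; the local file path is an assumption.

```python
# Hedged sketch: verify a downloaded safetensors shard against its Git LFS
# pointer (oid sha256 + size). Expected values are the new pointer for
# model-00001-of-00007.safetensors in this commit; the local path is assumed.
import hashlib
from pathlib import Path

shard_path = Path("model-00001-of-00007.safetensors")  # assumed local path
expected_sha256 = "a1f8a6c11aebce8545d88d2cb434e754b748a173d3cb2507948ed9abd7e88548"
expected_size = 4994763632  # bytes, from the LFS pointer

hasher = hashlib.sha256()
with shard_path.open("rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):  # hash in 1 MiB chunks
        hasher.update(chunk)

assert shard_path.stat().st_size == expected_size, "size mismatch"
assert hasher.hexdigest() == expected_sha256, "sha256 mismatch"
print("shard matches its LFS pointer")
```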
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "total_flos": 40609159823360.0,
+ "train_loss": 0.6996430353867099,
+ "train_runtime": 32431.1921,
+ "train_samples": 1000,
+ "train_samples_per_second": 0.274,
+ "train_steps_per_second": 0.034
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training.log CHANGED
@@ -214,3 +214,4 @@ weight_decay=0.0,
  )
  (lm_head): Linear(in_features=2048, out_features=102400, bias=False)
  )
+ 2025-08-14 03:40:52 - INFO - __main__ - *** Save model ***
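The log tail above shows the model's output head, `Linear(in_features=2048, out_features=102400, bias=False)`, i.e. a hidden size of 2048 projected onto the 102,400-token DeepSeek vocabulary. A small sketch that reads the same figures from the checkpoint's config without loading any weights; the attribute names assume the DeepSeek-V2 config schema.

```python
# Hedged sketch: confirm the lm_head dimensions reported in training.log by
# inspecting only the config (no weights are downloaded or loaded).
from transformers import AutoConfig

model_id = "HectorHe/Deepseek-Coder-V2-Lite-13B-Instruct-sft-s1K"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)

print(config.hidden_size)  # expected 2048, the lm_head in_features
print(config.vocab_size)   # expected 102400, the lm_head out_features
```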