---
base_model:
- JungZoona/T3Q-qwen2.5-14b-v1.2-e2
- sthenno-com/miscii-14b-0218
- Sakalti/Saka-14B
- Qwen/Qwen2.5-14B
- prithivMLmods/Galactic-Qwen-14B-Exp2
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# merge

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) as the base model.

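Model Stock averages several fine-tuned checkpoints and then interpolates that average back toward the base model, with a ratio derived from how well the fine-tuned weight deltas agree with each other. A minimal per-layer NumPy sketch of the idea follows; this is not mergekit's actual implementation, `model_stock_layer` is a hypothetical helper name, and it assumes the paper's interpolation ratio t = k·cosθ / (1 + (k−1)·cosθ):

```python
import numpy as np

def model_stock_layer(base, finetuned):
    """Simplified per-layer Model Stock merge (after arXiv:2403.19522).

    base: 1-D array of base-model weights for one layer.
    finetuned: list of 1-D arrays of fine-tuned weights for the same layer.
    """
    deltas = [w - base for w in finetuned]
    k = len(deltas)
    # Average pairwise cosine similarity between the fine-tuned deltas:
    # high agreement means the fine-tunes point the same way.
    cos = np.mean([
        np.dot(deltas[i], deltas[j])
        / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
        for i in range(k) for j in range(i + 1, k)
    ])
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos / (1 + (k - 1) * cos)
    avg = np.mean(finetuned, axis=0)
    # Move from the base toward the average of the fine-tuned models.
    return t * avg + (1 - t) * base
```

When the fine-tuned deltas all agree (cosθ → 1) the result is simply their average; when they are orthogonal (cosθ → 0) the merge stays at the base weights.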
### Models Merged

The following models were included in the merge:
* [JungZoona/T3Q-qwen2.5-14b-v1.2-e2](https://huggingface.co/JungZoona/T3Q-qwen2.5-14b-v1.2-e2)
* [sthenno-com/miscii-14b-0218](https://huggingface.co/sthenno-com/miscii-14b-0218)
* [Sakalti/Saka-14B](https://huggingface.co/Sakalti/Saka-14B)
* [prithivMLmods/Galactic-Qwen-14B-Exp2](https://huggingface.co/prithivMLmods/Galactic-Qwen-14B-Exp2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: sthenno-com/miscii-14b-0218
- model: Sakalti/Saka-14B
- model: prithivMLmods/Galactic-Qwen-14B-Exp2
- model: JungZoona/T3Q-qwen2.5-14b-v1.2-e2
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-14B-Instruct
```
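A merge like this can typically be reproduced by saving the configuration above to a file and invoking mergekit's command-line entry point. This is a sketch: the exact flags depend on your mergekit version and hardware, and `merge-config.yaml` / `./merged-model` are placeholder paths.

```shell
pip install mergekit

# Save the YAML above as merge-config.yaml, then merge into ./merged-model.
# --cuda runs the merge on a GPU if one is available;
# --lazy-unpickle lowers peak RAM while loading checkpoint shards.
mergekit-yaml merge-config.yaml ./merged-model --cuda --lazy-unpickle
```

The `tokenizer_source` entry tells mergekit to copy the tokenizer from Qwen/Qwen2.5-14B-Instruct into the output directory rather than from the base model.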