# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Benchmarks
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---|---|---|---|---|---|---|---|
| tinyBenchmarks | N/A | | | | | | | |
| - tinyArc | 0 | none | 25 | acc_norm | ↑ | 0.5361 | ± | N/A |
| - tinyGSM8k | 0 | flexible-extract | 5 | exact_match | ↑ | 0.6066 | ± | N/A |
| | | strict-match | 5 | exact_match | ↑ | 0.5813 | ± | N/A |
| - tinyHellaswag | 0 | none | 10 | acc_norm | ↑ | 0.6234 | ± | N/A |
| - tinyMMLU | 0 | none | 0 | acc_norm | ↑ | 0.5490 | ± | N/A |
| - tinyTruthfulQA | 0 | none | 0 | acc | ↑ | 0.5333 | ± | N/A |
| - tinyWinogrande | 0 | none | 5 | acc_norm | ↑ | 0.6196 | ± | N/A |
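The task names, metrics, and n-shot settings above follow the output format of EleutherAI's lm-evaluation-harness `tinyBenchmarks` task group. Assuming the scores were produced that way (the card does not state the harness version, and the repo id below is a placeholder, not this model's actual name), a comparable run could look roughly like this:

```python
import lm_eval  # EleutherAI lm-evaluation-harness (pip install lm-eval)

# Placeholder repo id -- substitute the actual Hub name of this merge.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-username/qwen2.5-7b-instruct-math-merge,dtype=float16",
    tasks=["tinyBenchmarks"],
    batch_size=8,
)

# Print the metric dict for each sub-task (tinyArc, tinyGSM8k, tinyHellaswag, ...).
for task, metrics in results["results"].items():
    print(task, metrics)
```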
### Merge Method
This model was merged using the Linear merge method.
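"Linear" means each output tensor is a weighted average of the corresponding tensors from the source models. Read against the configuration below (assuming the weight list maps to the models in the order they are listed), the instruct model dominates the early layer slices and the math model dominates the later ones. A minimal sketch of the arithmetic, not mergekit's actual implementation:

```python
import torch

def linear_merge(a: torch.Tensor, b: torch.Tensor, w_a: float, w_b: float) -> torch.Tensor:
    # Element-wise weighted average of two same-shaped weight tensors.
    return w_a * a + w_b * b

# Toy stand-ins for one layer's weights from each source model,
# combined with the weights used for the first slice of this config.
instruct_weight = torch.randn(4, 4)  # e.g. from Qwen/Qwen2.5-7B-Instruct
math_weight = torch.randn(4, 4)      # e.g. from Qwen/Qwen2.5-Math-7B-Instruct
merged_weight = linear_merge(instruct_weight, math_weight, 0.85, 0.15)
```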
### Models Merged

The following models were included in the merge:
- [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Qwen/Qwen2.5-7B-Instruct
        layer_range: [0, 7]
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [0, 7]
    parameters:
      weight: [0.85, 0.15]
  - sources:
      - model: Qwen/Qwen2.5-7B-Instruct
        layer_range: [7, 14]
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [7, 14]
    parameters:
      weight: [0.6, 0.4]
  - sources:
      - model: Qwen/Qwen2.5-7B-Instruct
        layer_range: [14, 21]
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [14, 21]
    parameters:
      weight: [0.4, 0.6]
  - sources:
      - model: Qwen/Qwen2.5-7B-Instruct
        layer_range: [21, 28]
      - model: Qwen/Qwen2.5-Math-7B-Instruct
        layer_range: [21, 28]
    parameters:
      weight: [0.3, 0.7]
merge_method: linear
dtype: float16
```
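A configuration like this is typically run with mergekit's `mergekit-yaml` command, which takes the YAML file and an output directory. Once the merged weights are on the Hub, they load like any other Qwen2.5 checkpoint. A minimal usage sketch with transformers (the repo id is a placeholder, not this model's actual name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Hub name of this merge.
model_id = "your-username/qwen2.5-7b-instruct-math-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```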