---
base_model:
- Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0
- akjindal53244/Llama-3.1-Storm-8B
- mlabonne/OrpoLlama-3-8B
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged with the [DARE TIES](https://arxiv.org/abs/2311.03099) method, using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base model.
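
For intuition, here is a minimal sketch of the per-tensor DARE-TIES update on toy tensors. This is illustrative only, not mergekit's actual implementation: the real merge also normalizes by the participating weights when `normalize: true` is set, and handles masking, dtypes, and sharded checkpoints.

```python
import torch

torch.manual_seed(0)

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # DARE: randomly Drop a fraction (1 - density) of the task-vector
    # entries, then REscale the survivors by 1/density so the expected
    # value of the task vector is preserved.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base, finetuned, densities, weights):
    # Task vectors: each fine-tune's offset from the shared base weights.
    deltas = [dare(ft - base, d) for ft, d in zip(finetuned, densities)]
    contrib = torch.stack([w * d for w, d in zip(weights, deltas)])
    # TIES sign election: per parameter, keep only contributions whose
    # sign agrees with the sign of the weighted sum, then merge those
    # and add the result back onto the base weights.
    elected = contrib.sum(dim=0).sign()
    agree = (contrib.sign() == elected).to(contrib.dtype)
    return base + (contrib * agree).sum(dim=0)

base = torch.zeros(4)
fts = [base + torch.randn(4) for _ in range(3)]
merged = dare_ties(base, fts, densities=[0.53, 0.5, 0.5], weights=[0.45, 0.25, 0.15])
print(merged)
```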

### Models Merged

The following models were included in the merge:

* [Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0](https://huggingface.co/Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
* [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
  - model: akjindal53244/Llama-3.1-Storm-8B
    parameters:
      density: 0.53
      weight: 0.45
  - model: Columbia-NLP/LION-LLaMA-3-8b-sft-v1.0
    parameters:
      density: 0.5
      weight: 0.25
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.5
      weight: 0.15

merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
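
The merge itself is produced by mergekit's CLI (for example, `mergekit-yaml config.yml ./output-model`). The resulting checkpoint loads like any Llama-architecture model with `transformers`; a minimal usage sketch follows, where the model path is a placeholder for wherever the merged weights were saved:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./output-model"  # placeholder: local merge output dir or a Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",           # requires the `accelerate` package
)

messages = [{"role": "user", "content": "Briefly explain model merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```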