Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ It has been trained to decide when to use one of two available tools: `search_do
 
 This model is a Parameter-Efficient Fine-Tuning (PEFT) adaptation of LLaMA-3.2-3B focused on tool use. It employs Low-Rank Adaptation (LoRA) to efficiently fine-tune the base model for function calling capabilities.
 
-- **Developed by:** [Uness.fr]
+- **Developed by:** [Uness.fr](https://uness.fr)
 - **Model type:** Fine-tuned LLM (LoRA)
 - **Language(s) (NLP):** English
 - **License:** [Same as base model - specify LLaMA 3.2 license]
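The description above mentions that the LoRA adapter targets function calling on top of LLaMA-3.2-3B but does not show how it is loaded. Below is a minimal sketch using the `transformers` and `peft` libraries, assuming a standard PEFT/LoRA checkpoint layout; the base model id and the adapter repo id are placeholders, not names confirmed by this card.

```python
# Minimal sketch, assuming a standard PEFT/LoRA checkpoint layout.
# BASE_ID and ADAPTER_ID are placeholders, not ids confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Llama-3.2-3B"      # assumed base checkpoint
ADAPTER_ID = "<this-adapter-repo-id>"    # placeholder for this repository

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(BASE_ID)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()
```

Loading this way keeps the base weights untouched; only the low-rank adapter matrices are added at inference time.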
@@ -62,8 +62,6 @@ This model should not be used for:
 ## Bias, Risks, and Limitations
 
 - The model inherits biases from the base LLaMA-3.2-3B model
-- Limited to handling French medical multiple-choice questions
-- May struggle with queries that fall outside its medical domain
 - Performance depends on how similar user queries are to the training data format
 - There's a strong dependency on the specific prefixing pattern used in training ("Search information about")
 
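The limitations above note a strong dependency on the "Search information about" prefix used during training, so queries following that pattern are the most likely to trigger the intended tool-use behaviour. The snippet below is only an illustrative sketch of that pattern, continuing from the loading sketch above; the example query and generation settings are not taken from the card.

```python
# Illustrative sketch of the query pattern mentioned in the limitations.
# Assumes `model` and `tokenizer` from the loading sketch above; the example
# query and generation settings are placeholders, not values from this card.
prompt = "Search information about beta-blockers in chronic heart failure"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Queries that drop the prefix or drift far from the training format may still work, but, per the limitations, output quality is expected to degrade.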