---
license: agpl-3.0
datasets:
- bitext/Bitext-customer-support-llm-chatbot-training-dataset
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Ansah-E1

This repository contains a fully merged, 4-bit quantized model built by integrating a customer support adapter into the base [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) model.

## Model Overview

- **Base Model:** Llama-3.2-1B-Instruct from Meta
- **Adapter:** A customer support chatbot adapter fine-tuned on the [Bitext customer support dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset)
- **Merged Model:** The adapter weights have been fully merged into the base model for streamlined inference (see the sketch after this list)
- **Quantization:** The model is quantized to 4-bit for improved efficiency while maintaining performance

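For context, a merge like the one described above is typically produced with PEFT's `merge_and_unload`. A minimal sketch, assuming a LoRA-style adapter; the adapter path below is a placeholder, not an artifact published in this repo:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model and attach the fine-tuned customer support adapter
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "path/to/customer-support-adapter")  # hypothetical adapter path

# Fold the adapter weights into the base weights, leaving a plain standalone model
merged = model.merge_and_unload()
merged.save_pretrained("Ansah-E1")
```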
## Usage

This model behaves like any other Hugging Face model. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the model in 4-bit (the load_in_4bit kwarg is deprecated; use BitsAndBytesConfig)
quant_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained("your_username/Ansah-E1", quantization_config=quant_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("your_username/Ansah-E1")

# Instruct models expect the chat template rather than a raw prompt
messages = [{"role": "user", "content": "I received a damaged product and want to return it. What's the process?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
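Note: 4-bit loading depends on the `bitsandbytes` library, and `device_map="auto"` requires `accelerate`, so install both alongside `transformers` (`pip install transformers accelerate bitsandbytes`).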