HyperCLOVAX-SEED-Think-14B-GPTQ

Introduction

This repository contains GPTQ-quantized model files for HyperCLOVAX-SEED-Think-14B.

HyperCLOVAX-SEED-Think-14B-GPTQ was quantized with gptqmodel v4.0.0, following the gptqmodel quantization guide.

Model Configuration
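
The exact settings used to produce this checkpoint are not published here. As a rough illustration, a typical gptqmodel v4 quantization run looks like the sketch below; the bit width, group size, base-model ID, and calibration data are all assumptions, not the configuration actually used for this model.

from gptqmodel import GPTQModel, QuantizeConfig

# Assumed settings -- typical for GPTQ checkpoints, not confirmed for this one.
quant_config = QuantizeConfig(
    bits=4,          # assumed 4-bit weights
    group_size=128,  # assumed quantization group size
)

# Base model ID is an assumption for illustration.
model = GPTQModel.load(
    "naver-hyperclovax/HyperCLOVAX-SEED-Think-14B",
    quant_config,
)

# Calibration data: a small list of representative text samples (placeholder).
calibration_dataset = ["HyperCLOVA X is a large language model family."]
model.quantize(calibration_dataset)
model.save("HyperCLOVAX-SEED-Think-14B-GPTQ")

Quantizing a 14B model this way requires a GPU with enough memory to hold the full-precision weights during calibration.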

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "K-Compression/HyperCLOVAX-SEED-Think-14B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Generate a short completion (assumes the tokenizer ships a chat template).
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Performance (Non-Think)

| Model | MMLU (0-shot) | HAERAE (0-shot) |
|---|---|---|
| HyperCLOVA X SEED 14B Think | 0.7144 | 0.8130 |
| HyperCLOVA X SEED 14B Think-GPTQ | 0.7018 | 0.8139 |

License

This model is licensed under the HyperCLOVA X SEED Model License Agreement.
