---
base_model:
- deepseek-ai/DeepSeek-V3
---

This is the first 4 layers of DeepSeek-V3, quantized GPTQ-style:

- All routed experts in layer 4 (256 experts) are quantized to 2-bit.
- All other Linear layers are quantized to 4-bit (including MLA, the dense-layer FFN, and the shared expert).

To load and run this model:

```python
from transformers import AutoTokenizer
from gptqmodel import GPTQModel, get_best_device

# Local paths to the bf16 source checkpoint and the quantized model
pretrained_model_id = "/root/dataDisk/DeepSeek-V3-bf16-4layers"
quantized_model_id = "/root/dataDisk/DeepSeek-V3-4bit-4layers"

# The tokenizer is loaded from the original (pre-quantization) checkpoint
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_id, use_fast=True)

# Pick the best available device (e.g. CUDA, MPS, or CPU)
device = get_best_device()

# Load the GPTQ-quantized model
model = GPTQModel.load(quantized_model_id, device=device, trust_remote_code=True)

# Smoke test: generate a short continuation
inputs = tokenizer("gptqmodel is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```
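For reference, a mixed-precision recipe along these lines could be expressed with GPTQModel's per-module `dynamic` overrides in `QuantizeConfig`. The sketch below is not the exact script used to produce this checkpoint: the expert-module regex, the 0-indexed layer number, the group size, and the calibration dataset are all assumptions.

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

pretrained_model_id = "/root/dataDisk/DeepSeek-V3-bf16-4layers"
quantized_model_id = "/root/dataDisk/DeepSeek-V3-4bit-4layers"

# Sketch only: 4-bit as the default, with layer 4's routed experts dropped
# to 2-bit via `dynamic` overrides (regex -> per-module quantization settings).
quantize_config = QuantizeConfig(
    bits=4,          # default bit-width for all quantized Linear layers
    group_size=128,  # assumed group size; not stated in this card
    dynamic={
        # Layer 4 is index 3 if layers are 0-indexed (an assumption);
        # the `mlp.experts` module path follows DeepSeek-V3's naming.
        r"+:.*\.layers\.3\.mlp\.experts\..*": {"bits": 2},
    },
)

# Hypothetical calibration set; any representative text corpus would do
calibration_dataset = load_dataset(
    "allenai/c4", data_files="en/c4-train.00001-of-01024.json.gz", split="train"
).select(range(512))["text"]

model = GPTQModel.load(pretrained_model_id, quantize_config, trust_remote_code=True)
model.quantize(calibration_dataset, batch_size=1)
model.save(quantized_model_id)
```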