---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-7B-Instruct
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
tags:
- chat
---
4-bit [GPTQ](https://arxiv.org/abs/2210.17323) quantized version of [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for use with the [Private LLM app](https://privatellm.app/).