---
language:
- en
- zh
tags:
- fp8
- quantization
- dynamic
- vision-language
- multimodal
- vllm
- llm-compressor
- internvl3.5
base_model: OpenGVLab/InternVL3_5-1B
base_model_relation: quantized
pipeline_tag: image-text-to-text
inference: false
license: mit
---
# 🔥 InternVL3_5-1B-FP8-Dynamic 🔥
This is an **FP8 dynamic (W8A8)** quantization of [OpenGVLab/InternVL3_5-1B](https://huggingface.co/OpenGVLab/InternVL3_5-1B), optimized for high-performance inference with vLLM.

## Just Run It (vLLM serve)

You can serve the model using vLLM's OpenAI-compatible API server; a minimal client sketch follows the notes below.

```bash
vllm serve brandonbeiler/InternVL3_5-1B-FP8-Dynamic \
    --quantization compressed-tensors \
    --served-model-name internvl3_5-1b \
    --reasoning-parser qwen3 \
    --trust-remote-code \
    --max-model-len 32768 \
    --tensor-parallel-size 1 # Adjust based on your GPU setup
```
**Notes**
- 32k max context length
- reasoning parser ready to go; requires a system prompt to run in thinking mode
- still investigating tool calling
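
Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch, assuming vLLM's default port 8000, the `--served-model-name` above, and a placeholder image URL:

```python
# Minimal client sketch: assumes the vLLM server above is listening on
# localhost:8000 and uses a placeholder image URL.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internvl3_5-1b",  # matches --served-model-name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```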


## 🚀 Key Features
- **FP8 Dynamic Quantization**: No calibration required, ready to use immediately
- **Vision-Language Optimized**: Specialized quantization recipe that preserves visual understanding
- **vLLM Ready**: Seamless integration with vLLM for production deployment (see the offline inference sketch after this list)
- **Memory Efficient**: ~50% memory reduction compared to FP16 original
- **Performance Boost**: Significantly faster inference on H100/L40S GPUs
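
Beyond the server path, the same checkpoint can be loaded for offline batch inference. A minimal sketch, assuming a recent vLLM with multimodal `LLM.chat` support and a placeholder image URL:

```python
# Offline inference sketch; LLM.chat applies the model's chat template and
# accepts OpenAI-style multimodal messages in recent vLLM versions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="brandonbeiler/InternVL3_5-1B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=32768,
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
        {"type": "text", "text": "What is shown in this image?"},
    ],
}]

outputs = llm.chat(messages, SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```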
## 📊 Model Details
- **Original Model**: [OpenGVLab/InternVL3_5-1B](https://huggingface.co/OpenGVLab/InternVL3_5-1B)
- **Quantized Model**: InternVL3_5-1B-FP8-Dynamic
- **Quantization Method**: FP8 Dynamic (W8A8)
- **Quantization Library**: [LLM Compressor](https://github.com/vllm-project/llm-compressor) v0.7.1
- **Quantized by**: [brandonbeiler](https://huggingface.co/brandonbeiler)

## 🏗️ Technical Specifications
### Hardware Requirements
- **Inference**: ? VRAM (plus VRAM for context; see the rough estimate below)
- **Supported GPUs**: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
- **GPU Architecture**: Recent NVIDIA GPUs (Ada Lovelace, Hopper, and later) and recent AMD GPUs; NVIDIA compute capability >= 9.0 (Hopper, Blackwell) is recommended
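
For a rough sense of the weight footprint, a back-of-the-envelope sketch. The ~1B parameter count is an assumption from the model name; KV cache and activations are ignored, and the preserved higher-precision components mean real savings land somewhat below 2x:

```python
# Back-of-the-envelope weight memory (assumed ~1B parameters; KV cache,
# activations, and preserved BF16 components are not included).
params = 1e9
bytes_fp16 = params * 2  # 2 bytes per parameter
bytes_fp8 = params * 1   # 1 byte per parameter
print(f"FP16 weights: ~{bytes_fp16 / 1024**3:.1f} GiB")
print(f"FP8 weights:  ~{bytes_fp8 / 1024**3:.1f} GiB")
```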
### Quantization Details
- **Weights**: FP8 E4M3 with static per-channel scales
- **Activations**: FP8 E4M3 with dynamic per-token scales
- **Preserved Components**: Vision tower, embeddings, mlp1 (kept at original precision; see the reproduction sketch below)
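
For reference, a recipe along these lines can be expressed with LLM Compressor's `oneshot` API. This is a minimal sketch, not the exact script used for this checkpoint; the `ignore` patterns are assumptions derived from the preserved components listed above, and actual module names may differ:

```python
# Minimal FP8-dynamic sketch with llmcompressor==0.7.1 (not the exact script
# used for this checkpoint). The FP8_DYNAMIC scheme needs no calibration data.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    # Assumed ignore patterns, mirroring the preserved components above.
    ignore=["re:.*vision_model.*", "re:.*mlp1.*", "lm_head"],
)

oneshot(
    model="OpenGVLab/InternVL3_5-1B",
    recipe=recipe,
    output_dir="InternVL3_5-1B-FP8-Dynamic",
    trust_remote_code_model=True,  # InternVL uses custom modeling code
)
```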
## 🔬 Package Versions
This model was created using:
```
llmcompressor==0.7.1
compressed-tensors==0.10.2
transformers==4.55.0
torch==2.7.1
vllm==0.10.1.1
```

*Quantized with ❤️ using LLM Compressor for the open-source community*