---
language:
  - en
  - zh
tags:
  - fp8
  - quantization
  - dynamic
  - vision-language
  - multimodal
  - vllm
  - llm-compressor
  - internvl3.5
pipeline_tag: image-text-to-text
inference: false
license: mit
---

# 🔥 InternVL3_5-1B-FP8-Dynamic 🔥

This is an FP8 dynamic (W8A8) quantization of [OpenGVLab/InternVL3_5-1B](https://huggingface.co/OpenGVLab/InternVL3_5-1B), optimized for high-performance inference with vLLM.
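"Dynamic" here means activation scales are computed from each tensor's observed range at inference time, so no calibration dataset is needed. The toy PyTorch sketch below illustrates the idea (illustrative only; vLLM uses fused FP8 kernels, and `fp8_dynamic_quantize` is a hypothetical helper, not part of any library):

```python
import torch

def fp8_dynamic_quantize(x: torch.Tensor):
    """Toy dynamic FP8 (E4M3) quantization: the scale is derived from the
    tensor itself at call time, so no calibration pass is required."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = x.abs().amax().clamp(min=1e-12) / finfo.max
    x_fp8 = (x / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_fp8, scale  # dequantize with x_fp8.to(torch.float32) * scale

activations = torch.randn(4, 4096)
quantized, scale = fp8_dynamic_quantize(activations)
```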

## 🚀 Key Features

- **FP8 Dynamic Quantization**: No calibration required, ready to use immediately
- **Vision-Language Optimized**: Specialized quantization recipe that preserves visual understanding
- **vLLM Ready**: Seamless integration with vLLM for production deployment
- **Memory Efficient**: ~50% memory reduction compared to the FP16 original
- **Performance Boost**: Significantly faster inference on H100/L40S GPUs


## 🔧 Usage

### With vLLM (Recommended)

```python
from vllm import LLM, SamplingParams
from PIL import Image

# Load the quantized model
model = LLM(
    model="brandonbeiler/InternVL3_5-1B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=32768,     # InternVL3.5 supports up to 32k context
    tensor_parallel_size=1,  # Adjust based on your GPU setup
)

# InternVL3.5 recommends temperature 0.6, especially for thinking mode
sampling_params = SamplingParams(temperature=0.6, max_tokens=512)

# Pair the <image> placeholder in the prompt with actual image data
image = Image.open("example.jpg")  # any local image path
response = model.generate(
    {
        "prompt": "Describe this image: <image>",
        "multi_modal_data": {"image": image},
    },
    sampling_params,
)
print(response[0].outputs[0].text)
```
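The model can also be served through vLLM's OpenAI-compatible API (for example with `vllm serve brandonbeiler/InternVL3_5-1B-FP8-Dynamic --trust-remote-code --max-model-len 32768`) and queried with the standard `openai` client. A minimal sketch, assuming the server is running on the default port 8000 and using a placeholder image URL:

```python
from openai import OpenAI

# Point the client at the local vLLM server (no real API key required)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="brandonbeiler/InternVL3_5-1B-FP8-Dynamic",
    temperature=0.6,
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Placeholder URL: replace with a reachable image
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```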

## 🏗️ Technical Specifications

### Hardware Requirements

- **Inference**: ? VRAM (plus additional VRAM for context)
- **Supported GPUs**: H100, L40S, A100 (80GB), RTX 4090 (2x for tensor parallelism)
- **GPU Architecture**: NVIDIA GPUs with native FP8 support (Ada Lovelace, Hopper, and later) and recent AMD GPUs; NVIDIA compute capability >= 9.0 (Hopper, Blackwell) is recommended. See the quick check below.
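If you're unsure whether your GPU has native FP8 support, PyTorch can report its compute capability. A quick check, assuming a CUDA build of PyTorch (FP8 tensor cores arrive with Ada Lovelace, capability 8.9, and Hopper, 9.0):

```python
import torch

# Report the local GPU's compute capability
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")
# FP8 tensor cores are available from Ada Lovelace (8.9) / Hopper (9.0) onward
print("Native FP8 support:", (major, minor) >= (8, 9))
```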

### Quantization Details

- **Weights**: FP8 E4M3, per-channel scales fixed at quantization time
- **Activations**: FP8 E4M3, dynamic per-token scales computed at runtime
- **Preserved Components**: vision tower, embeddings, and the mlp1 projector are kept in their original precision (see the recipe sketch below)
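For reference, a recipe along these lines can be expressed with llm-compressor's data-free `FP8_DYNAMIC` scheme. This is a minimal sketch, not the exact recipe used for this checkpoint; the `ignore` patterns are assumptions based on the preserved components listed above:

```python
from transformers import AutoModel
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Load the original model (InternVL requires trust_remote_code)
model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL3_5-1B",
    trust_remote_code=True,
    torch_dtype="auto",
)

# FP8_DYNAMIC quantizes Linear weights with static per-channel scales and
# activations with dynamic per-token scales; no calibration data is needed
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=[
        "re:.*vision_model.*",  # assumed pattern: keep the vision tower unquantized
        "re:.*mlp1.*",          # assumed pattern: keep the multimodal projector unquantized
    ],
)

oneshot(model=model, recipe=recipe)
model.save_pretrained("InternVL3_5-1B-FP8-Dynamic", save_compressed=True)
```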

## 🔬 Package Versions

This model was created using:

```
llmcompressor==0.7.1
compressed-tensors==latest
transformers==4.55.0
torch==2.7.1
vllm==0.10.1.1
```

*Quantized with ❤️ using LLM Compressor for the open-source community*