FastMTP: Accelerating LLM Inference with Enhanced Multi-Token Prediction

Technical report (coming soon) · GitHub · HuggingFace · ModelScope

Overview

FastMTP is a simple yet effective method that enhances Multi-Token Prediction (MTP) for speculative decoding during inference. Our approach fine-tunes a single MTP head with shared weights across multiple causal draft steps, enabling it to capture longer-range dependencies and achieve higher acceptance rates in speculative decoding. By incorporating language-aware vocabulary compression, we further reduce computational overhead during draft generation. Across diverse benchmarks, FastMTP achieves an average 2.03× speedup over vanilla next-token prediction while maintaining lossless output quality. With low training cost and seamless integration into existing inference frameworks, FastMTP offers a practical and rapidly deployable solution for accelerating LLM inference.
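To make the draft/verify scheme concrete, here is a minimal, self-contained sketch of the speculative decoding loop that an MTP draft head plugs into. The `draft_step` and `target_next` functions below are toy stand-ins (simple arithmetic over token IDs), not the actual FastMTP head or target model; the verification shown is greedy prefix matching, which illustrates why the output is lossless: every emitted token is one the target model would have produced itself.

```python
def draft_k_tokens(draft_step, prefix, k):
    """Roll a single shared-weight draft head autoregressively for k causal steps."""
    draft, ctx = [], list(prefix)
    for _ in range(k):
        tok = draft_step(ctx)  # one cheap draft step
        draft.append(tok)
        ctx.append(tok)
    return draft

def verify(target_next, prefix, draft):
    """Accept the longest draft prefix that matches the target model's own
    greedy choices, then emit the target's token at the first mismatch
    (or a bonus token if everything was accepted)."""
    accepted, ctx = [], list(prefix)
    for tok in draft:
        t = target_next(ctx)
        if t != tok:
            return accepted, t          # mismatch: keep the target's token
        accepted.append(tok)
        ctx.append(tok)
    return accepted, target_next(ctx)   # all accepted; one bonus target token

# Toy "models": the draft agrees with the target except at every 4th position.
target_next = lambda ctx: (sum(ctx) + len(ctx)) % 7
def draft_step(ctx):
    t = target_next(ctx)
    return (t + 1) % 7 if len(ctx) % 4 == 3 else t

prefix = [1, 2]
accepted, bonus = verify(target_next, prefix, draft_k_tokens(draft_step, prefix, 4))
# accepted + [bonus] is exactly what vanilla greedy decoding would emit next.
```

Higher acceptance rates (more of `draft` surviving `verify`) directly translate into more tokens emitted per expensive target-model pass, which is the quantity FastMTP's fine-tuning improves.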

Speedup comparison of different methods across subtasks, evaluated on a single A10 GPU.

What's Included

This repository contains the model checkpoints for FastMTP and the processed compressed vocabulary.

Model size: 7.83B params · Tensor type: BF16 · Format: Safetensors