---
license: apache-2.0
language:
- en
- zh
---
# FastMTP: Accelerating LLM Inference with Enhanced Multi-Token Prediction
<p align="left">
<strong>Technical report (coming soon)</strong> ·
<a href="https://github.com/Tencent-BAC/FastMTP"><strong>Github</strong></a> ·
<a href="https://huggingface.co/TencentBAC/FastMTP"><strong>HuggingFace</strong></a> ·
<a href="https://modelscope.cn/models/TencentBAC/FastMTP"><strong>ModelScope</strong></a>
</p>
## Overview
FastMTP is a simple yet effective method that enhances Multi-Token Prediction (MTP) for speculative decoding during inference. Our approach fine-tunes a single MTP head with weights shared across multiple causal draft steps, enabling it to capture longer-range dependencies and achieve higher acceptance rates in speculative decoding. By incorporating language-aware vocabulary compression, we further reduce computational overhead during draft generation. Experimental results across diverse benchmarks show that FastMTP achieves an average 2.03× speedup over vanilla next-token prediction while maintaining lossless output quality. With low training cost and seamless integration into existing inference frameworks, FastMTP offers a practical and rapidly deployable solution for accelerating LLM inference.
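To make the speculative-decoding mechanism above concrete, here is a minimal toy sketch of the draft-then-verify loop that an MTP head plugs into. All functions here are illustrative stand-ins (simple integer sequences), not the FastMTP implementation: in the real system the draft model is the fine-tuned MTP head and the target model is the full LLM.

```python
# Toy sketch of a draft-then-verify speculative decoding step.
# Stand-in "models" operate on integer token IDs for illustration only.

def draft_tokens(prefix, k):
    """Stand-in draft model (the role FastMTP's MTP head plays):
    propose k candidate next tokens in one cheap pass."""
    return [(prefix[-1] + i + 1) % 100 for i in range(k)]

def target_next_token(prefix):
    """Stand-in target model: the full LLM's greedy next token."""
    return (prefix[-1] + 1) % 100

def speculative_step(prefix, k=4):
    """Verify k drafted tokens against the target model.

    Accept the longest prefix of the draft that matches what the
    target model would have produced itself, then append one token
    from the target (a correction on mismatch, a bonus token if all
    drafts are accepted). The output is identical to decoding with
    the target alone, which is why the method is lossless; the gain
    is that accepted tokens amortize target forward passes.
    """
    draft = draft_tokens(prefix, k)
    accepted = []
    for tok in draft:
        expected = target_next_token(prefix + accepted)
        if tok != expected:
            accepted.append(expected)  # target's correction ends the step
            break
        accepted.append(tok)
    else:
        # all k drafts accepted; target contributes one bonus token
        accepted.append(target_next_token(prefix + accepted))
    return accepted

seq = [0]
for _ in range(3):
    seq += speculative_step(seq)
```

A higher acceptance rate means more drafted tokens survive verification per step, which is exactly what fine-tuning one shared-weight head across multiple draft steps targets; vocabulary compression then shrinks the cost of `draft_tokens` itself.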
<img src="./assets/mtp-overview.png" width="75%">
Speedup comparison of different methods across subtasks, evaluated on a single A10 GPU:
<img src="./assets/radar_chart.png" width="55%">
## What's Included
This repository contains the FastMTP model checkpoints and the processed compressed vocabulary.
## Links
- Technical report (coming soon)
- Training & inference code: [GitHub Repository](https://github.com/Tencent-BAC/FastMTP)