Quantization

🤗 Optimum provides a neural_compressor package that enables you to apply quantization to many of the models hosted on the 🤗 hub using the Intel Neural Compressor quantization API.

IncQuantizer

The IncQuantizer class allows you to apply different quantization approaches, such as post-training static quantization, post-training dynamic quantization, and quantization-aware training, using PyTorch eager mode or FX graph mode.

class optimum.intel.IncQuantizer


```python
(
    config: typing.Union[str, optimum.intel.neural_compressor.configuration.IncQuantizationConfig],
    eval_func: typing.Optional[typing.Callable],
    train_func: typing.Optional[typing.Callable] = None,
    calib_dataloader: typing.Optional[torch.utils.data.dataloader.DataLoader] = None,
)
```
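All of the approaches above rest on the same affine mapping between floating-point values and low-bit integers. The sketch below is a minimal, plain-Python illustration of that mapping for unsigned 8-bit integers; it is not the Neural Compressor implementation, which operates on whole tensors and fuses these steps into optimized kernels.

```python
def quantize(values, num_bits=8):
    """Map floats to unsigned integers in [0, 2**num_bits - 1]
    using an affine (asymmetric) scheme: q = round(x / scale) + zero_point."""
    qmax = 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax or 1.0  # guard against constant input
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats: x ≈ (q - zero_point) * scale."""
    return [(qi - zero_point) * scale for qi in q]

# Toy "weights": quantize, then dequantize to see the rounding error.
weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
```

The per-value reconstruction error is bounded by the scale, which is why 8-bit quantization typically preserves accuracy while shrinking model size roughly fourfold versus float32.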