🤗 Optimum provides a neural_compressor package that enables you to apply quantization to many models hosted on the 🤗 hub using the Intel Neural Compressor quantization API.
The IncQuantizer class allows you to apply different quantization approaches, such as static quantization, dynamic quantization and quantization-aware training, using PyTorch eager mode or FX graph mode.
( config: typing.Union[str, optimum.intel.neural_compressor.configuration.IncQuantizationConfig], eval_func: typing.Optional[typing.Callable], train_func: typing.Optional[typing.Callable] = None, calib_dataloader: typing.Optional[torch.utils.data.dataloader.DataLoader] = None )
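As a minimal sketch of how this signature might be used, the snippet below builds a quantizer with an evaluation callback. The YAML path `quantization.yml`, the toy `eval_func` and the guarded import are assumptions added for illustration; they are not the library's documented example, so check them against your installed optimum-intel version.

```python
# Hedged sketch: constructing an IncQuantizer from the signature above.
# The import is guarded so the snippet also runs without optimum-intel installed.
try:
    from optimum.intel.neural_compressor import IncQuantizer
except ImportError:
    IncQuantizer = None

def eval_func(model):
    """Toy evaluation callback: fraction of toy samples the model labels correctly.

    Neural Compressor calls a function like this to compare the quantized
    model's metric against the float model's; replace the body with a real
    evaluation loop for your task.
    """
    samples = [([1.0], 1), ([0.0], 0)]  # (features, label) pairs
    correct = sum(int(model(x) == y) for x, y in samples)
    return correct / len(samples)

if IncQuantizer is not None:
    # Per the signature, `config` may be a path/string or an
    # IncQuantizationConfig instance; "quantization.yml" is a hypothetical path.
    quantizer = IncQuantizer("quantization.yml", eval_func=eval_func)
```

For quantization-aware training you would additionally supply the `train_func` callback, and for static quantization a `calib_dataloader` providing calibration batches, through the same constructor.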