( model: typing.Union[transformers.modeling_utils.PreTrainedModel, torch.nn.modules.module.Module], config_path_or_obj: typing.Union[str, optimum.intel.neural_compressor.config.IncQuantizationConfig], tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None, eval_func: typing.Optional[typing.Callable] = None, train_func: typing.Optional[typing.Callable] = None, calib_dataloader: typing.Optional[torch.utils.data.dataloader.DataLoader] = None )
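A minimal construction sketch for the signature above, assuming IncQuantizer is importable from optimum.intel.neural_compressor (consistent with the type paths shown); the configuration file path and the evaluate_accuracy helper are hypothetical:

```python
from transformers import AutoModelForSequenceClassification

from optimum.intel.neural_compressor import IncQuantizer

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

def eval_func(model):
    # Hypothetical helper: return the metric tracked by the tuning objective,
    # e.g. accuracy on a held-out validation set.
    return evaluate_accuracy(model)  # evaluate_accuracy is hypothetical

# config_path_or_obj accepts either a path to a quantization configuration
# file or an IncQuantizationConfig instance; a local path is used here.
quantizer = IncQuantizer(model, "quantization.yml", eval_func=eval_func)
```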
( model_name_or_path: str, inc_config: typing.Union[optimum.intel.neural_compressor.config.IncQuantizationConfig, str, NoneType] = None, config_name: str = None, **kwargs ) → quantizer
Parameters
model_name_or_path (str) —
Repository name in the Hugging Face Hub or path to a local directory hosting the model.
inc_config (Union[IncQuantizationConfig, str], optional) —
Configuration file containing all the information related to the model quantization.
Can be either an instance of IncQuantizationConfig or a string valid as input to IncQuantizationConfig.from_pretrained.
config_name (str, optional) —
Name of the configuration file.
cache_dir (str, optional) —
Path to a directory in which a downloaded configuration should be cached if the standard cache should
not be used.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the configuration files, overriding the cached versions if
they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete an incompletely received file. Attempts to resume the download if such a file
exists.
revision (str, optional) —
The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a
git-based system for storing models and other artifacts on huggingface.co, revision can be any
identifier allowed by git.
calib_dataloader (DataLoader, optional) —
DataLoader used for post-training quantization calibration.
eval_func (Callable, optional) —
Evaluation function used to evaluate the tuning objective.
train_func (Callable, optional) —
Training function used for the quantization aware training approach.
Returns
quantizer
IncQuantizer object.
Instantiate an IncQuantizer object from a configuration file that can either be hosted on huggingface.co or located in a local directory.
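For illustration, a hedged sketch of this classmethod (in optimum-intel this is IncQuantizer.from_config); the repository name, configuration file name, and evaluate_accuracy helper are hypothetical:

```python
from optimum.intel.neural_compressor import IncQuantizer

def eval_func(model):
    # Hypothetical helper returning the metric the tuning objective optimizes.
    return evaluate_accuracy(model)  # evaluate_accuracy is hypothetical

# config_name, eval_func, etc. are forwarded through **kwargs as described
# in the parameter list above.
quantizer = IncQuantizer.from_config(
    "my-org/my-model",               # hypothetical Hub repository
    config_name="quantization.yml",  # hypothetical configuration file name
    eval_func=eval_func,
)
```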
( model_name_or_path: str, inc_config: typing.Union[optimum.intel.neural_compressor.config.IncOptimizedConfig, str] = None, q_model_name: typing.Optional[str] = None, input_names: typing.Optional[typing.List[str]] = None, batch_size: typing.Optional[int] = None, sequence_length: typing.Union[int, typing.List[int], typing.Tuple[int], NoneType] = None, num_choices: typing.Optional[int] = -1, **kwargs ) → q_model
Parameters
model_name_or_path (str) —
Repository name in the Hugging Face Hub or path to a local directory hosting the model.
inc_config (Union[IncOptimizedConfig, str], optional) —
Configuration file containing all the information related to the model quantization.
Can be either an instance of IncOptimizedConfig or a string valid as input to IncOptimizedConfig.from_pretrained.
q_model_name (str, optional) —
Name of the state dictionary located in model_name_or_path used to load the quantized model. If
state_dict is specified, q_model_name is ignored.
input_names (List[str], optional) —
List of the names of the inputs used when tracing the model. If unset, model.dummy_inputs.keys() are
used instead.
batch_size (int, optional) —
Batch size of the traced model inputs.
sequence_length (Union[int, List[int], Tuple[int]], optional) —
Sequence length of the traced model inputs. For sequence-to-sequence models with different sequence
lengths between the encoder and the decoder inputs, this must be [encoder_sequence_length, decoder_sequence_length].
num_choices (int, optional, defaults to -1) —
The number of possible choices for a multiple choice task.
cache_dir (str, optional) —
Path to a directory in which a downloaded configuration should be cached if the standard cache should
not be used.
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the configuration files, overriding the cached versions if
they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete an incompletely received file. Attempts to resume the download if such a file
exists.
revision (str, optional) —
The specific model version to use. It can be a branch name, a tag name, or a commit id. Since we use a
git-based system for storing models and other artifacts on huggingface.co, revision can be any
identifier allowed by git.
state_dict (Dict[str, torch.Tensor], optional) —
State dictionary of the quantized model. If not specified, q_model_name will be used to load the
state dictionary.
Returns
q_model
Quantized model.
Instantiate a quantized PyTorch model from a given Intel Neural Compressor (INC) configuration file.
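As an illustration, a hedged sketch of this from_pretrained classmethod using one of optimum-intel's task-specific classes; the class name IncQuantizedModelForSequenceClassification, its import path, and the repository name are assumptions not shown in this section:

```python
from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification

# Loads the INC configuration and the quantized state dictionary from the
# given repository (or local directory) and rebuilds the quantized model.
q_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    "my-org/my-quantized-model",  # hypothetical repository name
    input_names=["input_ids", "attention_mask"],
    batch_size=1,
    sequence_length=128,
)
q_model.eval()  # the returned object behaves like a regular PyTorch module
```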