The `IncTrainer` class provides an API to apply compression techniques such as knowledge distillation, pruning and quantization while training a model. These compression techniques can easily be combined, and `IncTrainer` behaves very similarly to the Transformers `Trainer`:
```diff
-from transformers import Trainer
+from optimum.intel.neural_compressor import IncTrainer

-trainer = Trainer(
+trainer = IncTrainer(
     model=model,
     args=training_args,
     train_dataset=train_dataset,
     eval_dataset=eval_dataset,
     compute_metrics=compute_metrics,
     tokenizer=tokenizer,
     data_collator=data_collator,
 )
```

Its signature mirrors that of the Transformers `Trainer`:

```
IncTrainer(
    model: Module = None,
    args: TrainingArguments = None,
    data_collator: Optional[DataCollator] = None,
    train_dataset: Optional[torch.utils.data.Dataset] = None,
    eval_dataset: Optional[torch.utils.data.Dataset] = None,
    tokenizer: Optional[PreTrainedTokenizerBase] = None,
    model_init: Callable = None,
    compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None,
    callbacks: Optional[List[TrainerCallback]] = None,
    optimizers: Tuple = (None, None),
    preprocess_logits_for_metrics: Callable = None,
)
```
`compute_distillation_loss`

How the distillation loss is computed given the student and teacher outputs.

`compute_loss`

How the loss is computed by `IncTrainer`. By default, all models return the loss in the first element.

`save_model`

Saves the model, so it can be reloaded using `from_pretrained()`. Will only save from the main process.
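The distillation loss itself depends on how the distillation component is configured, but the general recipe is standard knowledge distillation: soften the student and teacher logits with a temperature and penalize their divergence. The sketch below is a minimal, framework-free illustration of that recipe; the function names and the KL-divergence formulation are assumptions for clarity, not `IncTrainer`'s actual implementation:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # Illustrative distillation loss: KL(teacher || student) on
    # temperature-softened distributions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)  # teacher (target) distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl

# Identical student and teacher logits give zero distillation loss.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
```

In practice this divergence term is combined with the regular task loss (e.g. cross-entropy on the labels) through a weighting coefficient.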
`train`

```
train(
    agent: Optional[Component] = None,
    resume_from_checkpoint: Optional[Union[bool, str]] = None,
    trial: Union["optuna.Trial", Dict[str, Any]] = None,
    ignore_keys_for_eval: Optional[List[str]] = None,
    **kwargs,
)
```

Main training entry point.

Parameters

- **agent** (`Component`, *optional*) — `Component` object containing the compression objects to apply during the training process.
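Conceptually, the `agent` bundles one or more compression objects whose hooks are invoked at the relevant points of the training loop (e.g. epoch boundaries). The sketch below is a hypothetical, simplified illustration of that callback pattern; the `Agent` and `PruningComponent` classes and their hook names are invented for this example and are not Neural Compressor's actual `Component` API:

```python
class PruningComponent:
    """Hypothetical compression component: ramps up sparsity each epoch."""

    def __init__(self, target_sparsity=0.5, epochs=5):
        self.target_sparsity = target_sparsity
        self.epochs = epochs
        self.sparsity = 0.0

    def on_epoch_begin(self, epoch):
        # Linearly ramp the sparsity toward the target over the schedule.
        self.sparsity = self.target_sparsity * min(1.0, (epoch + 1) / self.epochs)

class Agent:
    """Hypothetical agent dispatching training hooks to its components."""

    def __init__(self, components):
        self.components = components

    def on_epoch_begin(self, epoch):
        for component in self.components:
            component.on_epoch_begin(epoch)

# A training loop would call the agent's hooks at each boundary:
pruner = PruningComponent(target_sparsity=0.8, epochs=4)
agent = Agent([pruner])
for epoch in range(4):
    agent.on_epoch_begin(epoch)
print(pruner.sparsity)  # 0.8 after the final epoch
```

This is why passing an `agent` is optional: without one, `train` degrades to the regular fine-tuning loop with no compression applied.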