( opset: typing.Optional[int] = None, use_external_data_format: bool = False, optimization_config: typing.Optional[optimum.onnxruntime.configuration.OptimizationConfig] = None, quantization_config: typing.Optional[optimum.onnxruntime.configuration.QuantizationConfig] = None )
Parameters
opset (int, optional) —
ONNX opset version to export the model with.
use_external_data_format (bool, optional, defaults to False) —
Allow exporting models larger than 2 GB.
optimization_config (OptimizationConfig, optional, defaults to None) —
Configuration specifying how to optimize the model with ONNX Runtime.
quantization_config (QuantizationConfig, optional, defaults to None) —
Configuration specifying how to quantize the model with ONNX Runtime.
ORTConfig is the configuration class handling all the ONNX Runtime optimization and quantization parameters.