MultiModalPredictor.export_onnx
- MultiModalPredictor.export_onnx(data: dict | DataFrame, path: str | None = None, batch_size: int | None = None, verbose: bool | None = False, opset_version: int | None = 16, truncate_long_and_double: bool | None = False)
- Export this predictor’s model to an ONNX file.
- When the path argument is not provided, the method does not save the model to disk. Instead, it exports the ONNX model into a BytesIO buffer and returns its contents as bytes.
- Parameters:
- data – Raw data used to trace and export the model. If None, the method checks whether a processed batch has been provided instead.
- path (str, default=None) – The export path of the ONNX model. If no path is provided, the model is exported to memory.
- batch_size – The batch size of the exported model’s input. The batch dimension is normally a dynamic axis, so a small value can be used for faster export.
- verbose – Verbosity flag passed to torch.onnx.export.
- opset_version – The opset_version flag passed to torch.onnx.export.
- truncate_long_and_double (bool, default False) – Truncate weights provided in int64 or double (float64) to int32 and float32.
 
- Returns:
- onnx_path – The location of the exported ONNX model, if the path argument is provided. Otherwise, the ONNX model is returned as bytes.
- Return type:
- str or bytes
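
A minimal usage sketch of both export modes, assuming a predictor fitted on a toy text dataset; the variable names train_data and test_data and the file name "model.onnx" are illustrative placeholders, not part of the API:

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Hypothetical toy dataset; any DataFrame with a "label" column would do.
train_data = pd.DataFrame(
    {"text": ["good movie", "bad movie", "great plot", "awful plot"],
     "label": [1, 0, 1, 0]}
)
test_data = train_data.drop(columns=["label"])

predictor = MultiModalPredictor(label="label")
predictor.fit(train_data)

# Export to disk: returns the location of the saved ONNX model as a string.
onnx_path = predictor.export_onnx(data=test_data, path="model.onnx", batch_size=2)

# Export to memory: without a path, the serialized model is returned as bytes.
onnx_bytes = predictor.export_onnx(data=test_data, batch_size=2)
```

Either result can then be loaded with onnxruntime, whose InferenceSession accepts both a file path and serialized model bytes.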