MDR

YOLOv11-seg converted from PyTorch to ONNX with added pre-/post-processing: the ONNX model's inference is slower than the PyTorch model

I converted YOLOv11-seg from PyTorch to ONNX following the official Ultralytics documentation. However, the exported model does not include pre-processing or post-processing, so I added those ONNX operators myself using the onnxruntime_extensions package and some pipeline reference files. After adding pre- and post-processing to the model, its execution time is much slower than the PyTorch model:

| name | infer_time |
| --- | --- |
| yolov11-n seg with pre and post (pytorch) | 10.61 ms |
| yolov11-n seg with pre and post (onnx) | 20.16 ms |

Profiling the model layer by layer shows that the mask Resize step in the post-processing takes noticeably longer than the other operators.


I am using the following software versions:

| name | version |
| --- | --- |
| onnx | 1.17.0 |
| onnxruntime | 1.20.1 |
| onnxruntime-gpu | 1.18.0 |
| onnxruntime_extensions | 0.13.0 |
| pytorch | 2.5.0+cu121 |

Is there another ONNX operator that can replace Resize, or some other way to speed it up?
