troymyname00

Reputation: 702

Issues with Conversion of keypointrcnn_resnet50_fpn Torchvision Model from ONNX to TensorRT Engine

I am having a lot of difficulty converting a keypointrcnn_resnet50_fpn model from ONNX to TensorRT. I have searched extensively for a way to do this, and couldn't find anything that lets me generate the engine.

Here's how I exported the ONNX model.

import torch

# export with height and width as dynamic axes (opset 19)
torch.onnx.export(model.cpu(),
                  input_tensor.cpu(),
                  onnx_file_path,
                  export_params=True,
                  do_constant_folding=False,
                  input_names=['input'],
                  output_names=['boxes', 'labels', 'scores', 'keypoints', 'keypoints_scores'],
                  dynamic_axes={'input': {2: 'height', 3: 'width'}},
                  opset_version=19)
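A quick way to sanity-check the raw export under onnxruntime before any of the steps below (a minimal sketch; `onnx_file_path` and `input_tensor` are the same objects as above):

    import onnxruntime as ort

    # run the exported graph once through onnxruntime as a baseline
    sess = ort.InferenceSession(onnx_file_path, providers=['CPUExecutionProvider'])
    outputs = sess.run(None, {'input': input_tensor.cpu().numpy()})
    for name, out in zip(['boxes', 'labels', 'scores', 'keypoints', 'keypoints_scores'], outputs):
        print(name, out.shape)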

Here's how I prepared the ONNX model before conversion, and how I then tried to convert it.

    import subprocess

    import onnx
    from onnxsim import simplify

    # load the ONNX model
    onnx_model = onnx.load(onnx_file_path)
    # simplify the model with onnx-simplifier
    model_simp, check = simplify(onnx_model)
    # export the simplified model
    if check:
        onnx.save(model_simp, f"_simplified{onnx_file_name}")
    else:
        print("ERROR: Failed to simplify and save model")

    # re-export model suitable for TensorRT conversion:
    # graph-level optimization via onnxruntime
    cmd = f"python3 -m onnxruntime.transformers.optimizer \
            --input=_simplified{onnx_file_name} \
            --output=_optimized{onnx_file_name} \
            "
    subprocess.run(cmd, shell=True)
    # onnxruntime quantization pre-processing (shape inference + model optimization)
    cmd = f"python3 -m onnxruntime.quantization.preprocess \
            --input=_optimized{onnx_file_name} \
            --output=_q_preprocessed{onnx_file_name} \
            "
    subprocess.run(cmd, shell=True)

    # re-export model with inferred shapes
    reloaded_model = onnx.load(f"_q_preprocessed{onnx_file_name}")
    onnx.checker.check_model(reloaded_model)
    inferred_model = onnx.shape_inference.infer_shapes(reloaded_model, check_type=True, strict_mode=True, data_prop=True)
    onnx.save(inferred_model, f"_shape_inferred{onnx_file_name}")
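To see what shape inference concludes around the node that later fails (tensor names taken from the trtexec error further down), I can dump the relevant value_info entries; a sketch:

    # print inferred shapes around the Reshape that trtexec complains about
    suspect = {"/roi_heads/Flatten_output_0",
               "/roi_heads/Concat_2_output_0",
               "/roi_heads/Reshape_1_output_0"}
    for vi in inferred_model.graph.value_info:
        if vi.name in suspect:
            print(vi)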

    # ====

    # check validity of model prior to conversion
    cmd = f"polygraphy run _shape_inferred{onnx_file_name} --onnxrt"
    subprocess.run(cmd, shell=True)
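Polygraphy can also attempt the TensorRT path directly and compare it against onnxruntime, which should surface the same parse failure in one place (a sketch, using the same shape profile as the trtexec call below):

    # optional: compare TensorRT against onnxruntime in one shot
    cmd = f"polygraphy run _shape_inferred{onnx_file_name} --onnxrt --trt \
            --trt-min-shapes input:[1,3,512,512] \
            --trt-opt-shapes input:[2,3,512,512] \
            --trt-max-shapes input:[5,3,512,512] \
            "
    subprocess.run(cmd, shell=True)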

    # ====

    # generate TensorRT engine
    cmd = f"~/.../TensorRT-10.0.0.6/bin/trtexec \
            --onnx=_shape_inferred{onnx_file_name} \
            --minShapes=input:1x3x512x512 \
            --optShapes=input:2x3x512x512 \
            --maxShapes=input:5x3x512x512 \
            --saveEngine=_final_keypointrcnn_resnet50_fpn.trt \
            --useCudaGraph \
            "
    subprocess.run(cmd, shell=True)
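In case it helps anyone reproduce without trtexec, the failing parse step can also be driven from the TensorRT Python API (a minimal sketch, assuming the TensorRT 10 Python bindings are installed):

    import tensorrt as trt

    # parse the ONNX file with TensorRT's network parser and print any errors
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(0)  # explicit batch is the only mode in TensorRT 10
    parser = trt.OnnxParser(network, logger)
    with open(f"_shape_inferred{onnx_file_name}", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))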

At this last step, generating the engine with trtexec, I get the following error.

[E] Error[4]: [shapeContext.cpp::operator()::3946] Error Code 4: Shape Error (reshape wildcard -1 has infinite number of solutions or no solution. Reshaping [0,8] to [0,-1,4].)
[E] [TRT] ModelImporter.cpp:826: While parsing node number 447 [Reshape -> "/roi_heads/Reshape_1_output_0"]:
[E] [TRT] ModelImporter.cpp:829: --- Begin node ---
    input: "/roi_heads/Flatten_output_0"
    input: "/roi_heads/Concat_2_output_0"
    output: "/roi_heads/Reshape_1_output_0"
    name: "/roi_heads/Reshape_1"
    op_type: "Reshape"
    attribute {
      name: "allowzero"
      i: 0
      type: INT
    }

Does anyone have a suggestion for getting past this problem? I used Netron to look into the architecture, and the problem is in the region shown below. I appreciate any suggestions on this topic. Thanks.

[Netron screenshot: the /roi_heads/Reshape_1 region of the graph]
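If I read the error correctly, the Reshape receives a tensor with shape [0, 8] (zero detections at that point in the graph), and the -1 wildcard in the target shape [0, -1, 4] cannot be resolved when the total element count is zero, because any value would satisfy it. The same shape arithmetic fails in plain numpy (a minimal sketch of the shape logic, not of the model itself):

    import numpy as np

    # zero total elements: 0 * 8 == 0, so 0 * n * 4 == 0 for every n
    x = np.zeros((0, 8), dtype=np.float32)
    x.reshape(0, -1, 4)  # ValueError: cannot reshape array of size 0 into shape (0,newaxis,4)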

Upvotes: 0

Views: 104

Answers (0)
