Duncan MacLennan

Reputation: 13

YOLOv9 TensorRT not doing inference on Jetson Orin Nano

After running python3 yolov9_trt.py I got the error below. I've been following the tutorial on Robot Mania's YouTube channel, and everything was going fine up until the 14:30 mark.

[07/05/2024-14:42:26] [TRT] [W] onnx2trt_utils.cpp:372: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2024-07-05 14:42:26,214 - INFO - tensorrt_base,create_engine: create engine with FP16
2024-07-05 14:42:26,215 - INFO - tensorrt_base,create_engine: Creating an inference engine, please wait a few minutes!!!
[07/05/2024-15:05:50] [TRT] [W] TensorRT encountered issues when converting weights between types and that could affect accuracy.
[07/05/2024-15:05:50] [TRT] [W] If this is not the desired behavior, please modify the weights or retrain with regularization to adjust the magnitude of the weights.
[07/05/2024-15:05:50] [TRT] [W] Check verbose logs for the list of affected weights.
[07/05/2024-15:05:50] [TRT] [W] - 278 weights are affected by this issue: Detected subnormal FP16 values.
[07/05/2024-15:05:50] [TRT] [W] - 1 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
2024-07-05 15:05:50,138 - INFO - tensorrt_base,create_engine: Creating an inference engine successful!
/home/duncan/yolov9-tensorrt/yolov9_trt.py:175: DeprecationWarning: Use get_tensor_shape instead.
  self.logger.info("bingding shape:{}".format(engine.get_binding_shape(binding)))
2024-07-05 15:05:51,372 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 3, 640, 640)
/home/duncan/yolov9-tensorrt/yolov9_trt.py:176: DeprecationWarning: Use get_tensor_shape instead.
  size = trt.volume(engine.get_binding_shape(binding))
/home/duncan/yolov9-tensorrt/yolov9_trt.py:177: DeprecationWarning: Use get_tensor_dtype instead.
  dtype = trt.nptype(engine.get_binding_dtype(binding))
/home/duncan/yolov9-tensorrt/yolov9_trt.py:185: DeprecationWarning: Use get_tensor_shape instead.
  self.input_w = engine.get_binding_shape(binding)[-1]
/home/duncan/yolov9-tensorrt/yolov9_trt.py:186: DeprecationWarning: Use get_tensor_shape instead.
  self.input_h = engine.get_binding_shape(binding)[-2]
2024-07-05 15:05:51,380 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 84, 8400)
2024-07-05 15:05:51,381 - INFO - yolov9_trt,get_trt_model_stream: bingding shape:(1, 84, 8400)
wrapper took 17.6005 ms to execute.
Error in do_infer: 'Yolov9' object has no attribute 'output_dim'
wrapper took 199.6112 ms to execute.
Traceback (most recent call last):
  File "/home/duncan/yolov9-tensorrt/yolov9_trt.py", line 243, in <module>
    draw_detect_results(img, detect_results)
  File "/home/duncan/yolov9-tensorrt/python/draw_AI_results.py", line 24, in draw_detect_results
    for r in results:
TypeError: 'NoneType' object is not iterable
-------------------------------------------------------------------
PyCUDA ERROR: The context stack was not empty upon module cleanup.
-------------------------------------------------------------------
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
-------------------------------------------------------------------
Aborted (core dumped)
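
If it helps, my reading of the log is that the binding loop in get_trt_model_stream never sets an output_dim attribute, so do_infer fails with 'Yolov9' object has no attribute 'output_dim' and returns None, and draw_detect_results then crashes on that None result; the PyCUDA context abort at the very end looks like a follow-on symptom of the same exception. Below is a rough sketch of what I think that loop is meant to do, rewritten with the non-deprecated TensorRT tensor API named in the warnings (self and engine as used in the script; the output_dim name is only my guess from the error message, not the tutorial's actual code):

    import tensorrt as trt

    # Sketch only (TensorRT 8.5+): enumerate engine I/O with the tensor API
    # instead of the deprecated get_binding_* calls from the warnings above.
    for i in range(engine.num_io_tensors):
        name = engine.get_tensor_name(i)
        shape = engine.get_tensor_shape(name)               # replaces get_binding_shape
        dtype = trt.nptype(engine.get_tensor_dtype(name))   # replaces get_binding_dtype
        size = trt.volume(shape)
        if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
            self.input_w = shape[-1]   # 640 per the log's (1, 3, 640, 640)
            self.input_h = shape[-2]   # 640 per the log's (1, 3, 640, 640)
        else:
            # guessed attribute: 8400 from the log's output shape (1, 84, 8400)
            self.output_dim = shape[-1]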

Upvotes: 0

Views: 140

Answers (0)
